Trust in Artificial Intelligence: A global study, 2023. KPMG.com.au | uq.edu.au

Citation: Gillespie, N., Lockey, S., Curtis, C., Pool, J., & Akbari, A. (2023). Trust in Artificial Intelligence: A Global Study. The University of Queensland and KPMG Australia. doi:10.14264/00d3c94

University of Queensland Researchers: Professor Nicole Gillespie, Dr Steve Lockey, Dr Caitlin Curtis and Dr Javad Pool. The University of Queensland team led the design, conduct, analysis and reporting of this research.

KPMG Advisors: James Mabbott, Rita Fentener van Vlissingen, Jessica Wyndham, and Richard Boele.

Acknowledgements: We are grateful for the insightful input, expertise and feedback on this research provided by Dr Ali Akbari, Dr Ian Opperman, Rossana Bianchi, Professor Shazia Sadiq, Mike Richmond, and Dr Morteza Namvar, and members of the Trust, Ethics and Governance Alliance at The University of Queensland, particularly Dr Natalie Smith, Associate Professor Martin Edwards, Dr Shannon Colville and Alex Macdade.

Funding: This research was supported by an Australian Government Research Support Package grant provided to The University of Queensland AI Collaboratory, and by the KPMG Chair in Trust grant (ID 2018001776).

Acknowledgement of Country: The University of Queensland (UQ) acknowledges the Traditional Owners and their custodianship of the lands. We pay our respects to their Ancestors and their descendants, who continue cultural and spiritual connections to Country. We recognise their valuable contributions to Australian and global society.

© 2023 The University of Queensland ABN: 63 942 912 684 CRICOS Provider No: 00025B. © 2023 KPMG, an Australian partnership and a member firm of the KPMG global organisation of independent member firms affiliated with KPMG International Limited, a private English company limited by guarantee. All rights reserved. The KPMG
name and logo are trademarks used under license by the independent member firms of the KPMG global organisation. Liability limited by a scheme approved under Professional Standards Legislation.

Contents

Executive summary 02
Introduction 07
How we conducted the research 08
1. To what extent do people trust AI systems? 11
2. How do people perceive the benefits and risks of AI? 22
3. Who is trusted to develop, use and govern AI? 29
4. What do people expect of the management, governance and regulation of AI? 34
5. How do people feel about AI at work? 43
6. How well do people understand AI? 53
7. What are the key drivers of trust in and acceptance of AI? 60
8. How have trust and attitudes towards AI changed over time? 66
Conclusion and implications 70
Appendix 1: Method and statistical notes 73
Appendix 2: Country samples 75
Appendix 3: Key indicators for each country 77

Executive summary

Artificial Intelligence (AI) has become a ubiquitous part of everyday life and work. AI is enabling rapid innovation that is transforming the way work is done and how services are delivered. For example, generative AI tools such as ChatGPT are having a profound impact. Given the many potential and realised benefits for people, organisations and society, investment in AI continues to grow across all sectors,1 with organisations leveraging AI capabilities to improve predictions, optimise products and services, augment innovation, enhance productivity and efficiency, and lower costs, amongst other beneficial applications. However, the use of AI also poses risks and challenges, raising concerns about whether AI systems (inclusive of data, algorithms and
applications) are worthy of trust. These concerns have been fuelled by high profile cases of AI use that were biased, discriminatory, manipulative, unlawful, or violated human rights. Realising the benefits AI offers and the return on investment in these technologies requires maintaining the public's trust: people need to be confident AI is being developed and used in a responsible and trustworthy manner. Sustained acceptance and adoption of AI in society are founded on this trust. This research is the first to take a deep dive examination into the public's trust and attitudes towards the use of AI, and expectations of the management and governance of AI, across the globe. We surveyed over 17,000 people from 17 countries covering all global regions: Australia, Brazil, Canada, China, Estonia, Finland, France, Germany, India, Israel, Japan, the Netherlands, Singapore, South Africa, South Korea, the United Kingdom (UK), and the United States of America (USA). These countries are leaders in AI activity and readiness within their region. Each country sample is nationally representative of the population based on age, gender, and regional distribution. We asked survey respondents about trust and attitudes towards AI systems
18、in general,as well as AI use in the context of four application domains where AI is rapidly being deployed and likely to impact many people:in healthcare,public safety and security,human resources and consumer recommender applications.The research provides comprehensive,timely,global insights into t
19、he publics trust and acceptance of AI systems,including who is trusted to develop,use and govern AI,the perceived benefits and risks of AI use,community expectations of the development,regulation and governance of AI,and how organisations can support trust in their AI use.It also sheds light on how
people feel about the use of AI at work, current understanding and awareness of AI, and the key drivers of trust in AI systems. We also explore changes in trust and attitudes to AI over time. Next, we summarise the key findings.

Most people are wary about trusting AI systems and have low or moderate acceptance of AI: however, trust and acceptance depend on the AI application

Across countries, three out of five people (61%) are wary about trusting AI systems, reporting either ambivalence or an unwillingness to trust. Trust is particularly low in Finland and Japan, where less than a quarter of people report trusting AI. In contrast, people in the emerging economies of Brazil
, India, China and South Africa (BICS2) have the highest levels of trust, with the majority of people trusting AI systems. People have more faith in AI systems to produce accurate and reliable output and provide helpful services, and are more sceptical about the safety, security and fairness of AI systems and the extent to which they uphold privacy rights. Trust in AI systems is contextual and depends on the specific application or use case. Of the applications we examined, people are generally less trusting and accepting of AI use in human resources (i.e. for aiding hiring and promotion decisions), and more trusting of AI use in healthcare (i.e. for aiding medical diagnosis and treatment), where there is a direct benefit to them. People are generally more willing to rely on, than share information with, AI systems, particularly recommender systems (i.e. for personalising news, social media, and product recommendations) and security applications (i.e. for aiding public safety and security decisions). Many people feel ambivalent about the use of AI, reporting optimism or excitement on the one hand, while simultaneously reporting worry or fear. Overall, two-thirds of people feel optimistic about the use of AI, while
about half feel worried. While optimism and excitement are dominant emotions in many countries, particularly the BICS countries, fear and worry are dominant emotions for people in Australia, Canada, France, and Japan, with people in France the most fearful, worried, and outraged about AI.

People recognise the many benefits of AI, but only half believe the benefits outweigh the risks

People's wariness and ambivalence towards AI can be partly explained by their mixed views of the benefits and risks. Most people (85%) believe AI results in a range of benefits, and think that process benefits, such as improved efficiency, innovation, effectiveness, resource utilisation and reduced costs, are greater than the people benefits of enhancing decision-making and improving outcomes for people. However, on average, only one in two people believe the benefits of AI outweigh the risks. People in the western countries and Japan
are particularly unconvinced that the benefits outweigh the risks. In contrast, the majority of people in the BICS countries and Singapore believe the benefits outweigh the risks.

People perceive the risks of AI in a similar way across countries, with cybersecurity rated as the top risk globally

While there are differences in how the AI benefit-risk ratio is viewed, there is considerable consistency across countries in the way the risks of AI are perceived. Just under three-quarters (73%) of people across the globe report feeling concerned about the potential risks of AI. These risks include cybersecurity and privacy breaches, manipulation and harmful use, loss of jobs and deskilling, system failure, the erosion of human rights, and inaccurate or biased outcomes. In all countries, people rated cybersecurity risks as their top one or two concerns, and bias as the lowest concern. Job loss due to automation
is also a top concern in India and South Africa, and system failure ranks as a top concern in Japan, potentially reflecting their relatively heavy dependence on smart technology. These findings reinforce the critical importance of protecting people's data and privacy to secure and preserve trust, and of supporting global approaches and international standards for managing and mitigating AI risks across countries.

There is strong global endorsement for the principles of trustworthy AI: trust is contingent on upholding and assuring these principles are in place

Our findings reveal strong global public support for the principles and related practices organisations deploying AI systems are expected to uphold in order to be trusted. Each of the Trustworthy AI principles originally proposed by the European Commission3 is viewed as highly important for trust across all 17 countries, with data privacy, security and governance viewed as most important in all countries. This demonstrates that people expect organisations deploying AI systems to uphold high standards of:
- data privacy, security and governance
- technical performance, accuracy and robustness
- fairness, non-discrimination and diversity
- human agency and oversight
- transparency and explainability
- accountability and contestability
- risk and impact mitigation
- AI literacy support

People expect these principles to be in place for each of the AI use applications we examined (e.g., human resources, healthcare, security, recommender applications, and AI systems in general), suggesting their universal application. This strong public endorsement provides a blueprint for developing and using AI in a way that supports trust across the globe. Organisations can directly build trust and consumer willingness to use AI systems by supporting and implementing assurance mechanisms that help people feel confident these principles are being upheld.
Three out of four people would be more willing to trust an AI system when assurance mechanisms are in place that signal ethical and responsible use, such as monitoring system accuracy and reliability, independent AI ethics reviews, AI ethics certifications, adhering to standards, and AI codes of conduct.
These mechanisms are particularly important given the current reliance on industry regulation and governance in many jurisdictions.

People are most confident in universities and defence organisations to develop, use and govern AI, and least confident in government and commercial organisations

People have the most confidence in their national universities and research institutions, as well as their defence organisations, to develop, use and govern AI in the best interest of the public (76-82% confident). In contrast, they have the least confidence in governments and commercial organisations to do this. A third of people lack confidence in government and commercial organisations to develop, use and regulate
AI. This is problematic given the increasing scope with which governments and commercial organisations are using AI, and the public's expectation that these entities will responsibly govern and regulate its use. An implication is that government and business can partner with more trusted entities in the use and governance of AI. There are significant differences across countries in people's trust of their government to use and govern AI, with about half of people lacking confidence in their government in South Africa, Japan, the UK and the USA, whereas the majority in China, India and Singapore have high confidence in their government. This pattern mirrors people's general trust in their governments: we found a strong association between people's general trust in government, commercial organisations and
other institutions and their confidence in these entities to use and govern AI. These findings suggest that taking action to strengthen trust in institutions generally is an important foundation for trust in specific AI activities.

People expect AI to be regulated with some form of external, independent oversight, but view current regulations and safeguards as inadequate

The large majority of people (71%) expect AI to be regulated. With the exception of India, the majority in all other countries see regulation as necessary. This finding corroborates prior surveys4 indicating strong desire for regulation of AI, and is not surprising given most people (61%) believe the long-term impact of AI on society is uncertain and unpredictable. People are broadly supportive of multiple forms of regulation, including regulation by government and existing regulators, a dedicated independent AI regulator, and co-regulation and industry regulation, with general agreement on the need for some form of external, independent oversight. Despite the strong expectations of AI regulation, only two in five people believe current regulations, laws and safeguards are sufficient to make AI use safe. This aligns with previous surveys5 showing public dissatisfaction with the regulation of AI, and is problematic given the strong relationship between current safeguards and trust in AI demonstrated by our modelling. This highlights the importance of strengthening and communicating the regulatory and legal framework governing AI and data privacy. There are, however, substantial country differences, with people in India and China most likely to believe appropriate safeguards are in place (74-80% agree), followed by Brazil and Singapore (52-53%). In contrast, people in Japan and South Korea are the least convinced (13-17% agree), as are the majority of people in western countries. These differences in the perceived adequacy of regulations may partly explain the higher trust and acceptance of AI among people in the BICS countries.

Most people are comfortable with the use of AI to augment work and inform managerial decision-making, but want humans
to retain control

Most people are comfortable with the use of AI at work to augment and automate tasks, but are less comfortable when AI is focused on them as employees, for example for HR and people management (e.g. to monitor and evaluate employees, and support recruitment). On average, half the people are willing to trust AI at work, for example by relying on the output it provides. People in Australia, Canada, France and Germany are the least comfortable with the use of AI at work, while those in the BICS countries and Singapore are the most comfortable. Most people view AI use in managerial decision-making as acceptable, and actually prefer AI involvement to sole
human decision-making. However, the preferred option is either a 25%-75% or 50%-50% AI-human collaboration, with humans retaining more or equal control. This indicates a clear preference for AI to be used as a decision aid, and a lack of support for fully automated AI decision-making at work. While about half believe AI will enhance their competence and autonomy at work, less than one in three people believe AI will create more jobs than it will eliminate. However, most managers believe the opposite: that AI will create jobs. This reflects a broader trend of managers being more comfortable, trusting and supportive of AI use at work than other employees, with manual workers the least comfortable and trusting of AI at work. Given managers are typically the drivers of AI adoption in organisations, these differing views may cause tensions in the implementation of AI at work. A minority of people in western countries, Japan and South Korea report that their employing organisation invests in AI adoption, recognises efforts to integrate AI, or supports the responsible use of AI. This stands in contrast to a majority of people in the BICS countries and Singapore.

People want to learn more about AI but currently
have low understanding

While 82% of people are aware of AI, one in two people report feeling they do not understand AI or when and how it is used. Understanding of AI is highest in China, India, South Korea, and Singapore. Two out of five people are unaware that AI enables common applications they use. For example, even though 87% of people use social media, 45% do not know AI is used in social media. People who better understand AI are more likely to trust and accept it, and perceive greater benefits of AI use. This suggests understanding AI sets a foundation for trust. Most people across all countries (82%) want to know more about AI. Considered together, these findings suggest a strong need and appetite for public education on AI.

Younger generations, the university educated and managers are more trusting, accepting and generally hold more positive attitudes towards AI

Younger generations, the university educated, and managers show a consistent and distinctly more positive orientation towards AI across the findings, compared to older generations, those without a university education, and non-managers. They are more trusting and accepting of AI systems, including their use at work, and are more likely to feel positive about AI and report using it. They have greater knowledge of AI, are better able to identify when AI is used, and have greater interest in learning about AI. They perceive more benefits of AI, but remain the same as other groups in their perceptions of the risks of AI. They are more likely to
believe AI will create jobs, but are also more aware that AI can perform key aspects of their work. They are more confident in entities to develop, use and govern AI, and more likely to believe that current safeguards are sufficient to make AI use safe. It is noteworthy that we see very few meaningful differences across gender in trust and attitudes towards AI.

There are stark differences in trust and attitudes across countries: people in the emerging economies of Brazil, India, China, and South Africa are more trusting and accepting of AI and have more positive attitudes towards AI

A key insight from the survey is the stark differences in trust, attitudes and use of AI between people in the emerging economies of Brazil, India, China and South Africa and those in other countries. People in the emerging economies are more trusting and accepting of AI and hold more positive feelings and attitudes towards AI
than people in other countries. These differences held even when controlling for the effects of age and education. Singapore followed this positive orientation on several indicators, particularly comfort, trust and familiarity with the use of AI at work, the adequacy of current AI regulation and governance, confidence in companies to use and govern AI, and the belief that AI will create jobs. Our data suggests that this high trust is not blind to the risks. People in BICS countries and Singapore did not perceive the risks of AI, or the uncertain impact of AI on society, any lower than people in other countries. Nor did they differ from other countries on the importance of the principles and practices required to ensure AI is trustworthy. Rather, a key differentiator is that most people in the BICS countries and Singapore believe the benefits of AI outweigh the risks, whereas a minority of people in western countries, such as Australia, Canada, France, the Netherlands, the UK and the USA, hold this view. The higher trust and more positive attitudes in the BICS countries are likely due to the greater benefits afforded by technological advances and deployment in emerging economies, and the increasingly important economic role of AI technologies in these countries. This may encourage a growth mindset that motivates acceptance and use of technology as a means to accelerate economic progress, prosperity, and quality of life. An implication is that these countries may be uniquely positioned to rapidly accelerate innovation and technological advantage through AI. It is notable, however,
that on international rankings these countries rank low on governance and regulation frameworks to ensure the ethical and responsible use of AI, compared to western countries.6

AI awareness, understanding and trust in AI have increased over time, but institutional safeguards continue to lag

We had the opportunity to examine how trust and select attitudes to AI compared with our 2020 Trust in AI survey data, which was based on representative sampling from five western countries (Australia, Canada, Germany, the UK and the USA).7 Comparisons were made between data from these five countries in 2020 and 2022
using equivalent measures over time. This comparison suggests that trust in AI systems has increased in these countries over time, as has awareness of AI and understanding of AI use in common applications. However, there has been no increase in the perceived adequacy of institutional safeguards, such as regulation and laws to protect people from problems, despite most people in these countries perceiving such institutional safeguards as insufficient in 2020. Similarly, there was no increase in people's confidence in government and business to develop, use or regulate AI, despite low levels of confidence in these entities. There was, however, an increase in the view that AI regulation is needed in two countries: the UK and the USA. These findings suggest the institutional safeguards governing AI are not keeping pace with expectations and technological uptake. In some jurisdictions, these findings may reflect a
lack of communication and awareness of regulatory change.

Trust is central to the acceptance of AI and is influenced by four key drivers

Our analysis demonstrated that trust strongly influences AI acceptance, and hence is critical to the sustained societal adoption and support of AI.8 Our modelling identified four distinct pathways to trust, which represent key drivers that influence people's trust in AI systems:
1. an institutional pathway reflecting beliefs about the adequacy of current safeguards, regulations and laws to make AI use safe, and confidence in government and commercial organisations to
develop, use and govern AI
2. a motivational pathway reflecting the perceived benefits of AI use
3. an uncertainty reduction pathway reflecting the need to address concerns about the risks associated with AI
4. a knowledge pathway reflecting people's understanding of AI use and efficacy in using technology

Of these drivers, the institutional pathway had the strongest influence on trust, followed by the motivational pathway. These findings highlight the importance of developing adequate governance and regulatory mechanisms that safeguard people from the risks associated with AI use, and public confidence in entities to enact these safeguards, as well as ensuring AI is designed and used in a human-centric way to benefit people and support their understanding.

Pathways to strengthen public trust and acceptance

Collectively, the survey insights provide evidence-based pathways for strengthening the trustworthy and responsible use of AI systems, and the trusted adoption of AI in society. These insights are relevant for informing responsible AI strategy, practice and policy within business, government, and NGOs at a national level, as well as informing AI guidelines, standards and policy at the international and pan-governmental level. There are a range of resources available to support organisations to embed the principles and practices of trustworthy AI into their everyday operations and put in place mechanisms that support stakeholder trust in the use of AI.9 While proactively investing in these trust foundations can be time and resource intensive, our research suggests it is critical for sustained acceptance and adoption of smart technologies over time, and hence a return on investment.

In the next section, we provide an overview of the research methodology used. In the concluding section, we draw on the survey insights to identify evidence-based pathways for strengthening the trusted and responsible use of AI systems, and discuss the implications for industry, government, universities, and non-government organisations (NGOs).

Introduction

AI is rapidly becoming a ubiquitous part of everyday life and continuing to transform the way we live and work.10 All sectors of the global economy are now embracing AI, with AI applications expanding and diversifying into domains ranging from transport, crop and service optimisation, the diagnosis and treatment of diseases, and the protection of physical, financial, and cyber security, for example by fining distracted drivers,
detecting credit card fraud, identifying children at risk, and enabling facial recognition. While the benefits and promise of AI for society and business are undeniable, so too are the risks and challenges. These include the risk of codifying and reinforcing unfair biases, infringing on human rights such as privacy, spreading fake online content, deskilling and technological unemployment, and the risks stemming from mass surveillance technologies, critical AI failures and autonomous weapons. Even in cases where AI is developed to help people (e.g. to protect cybersecurity), there is the risk it can be used maliciously (e.g. for cyberattacks). These issues are causing public concern and raising questions about the trustworthiness and governance of AI systems.11 The public's trust in AI technologies is vital for continual acceptance. If AI systems do not prove worthy of trust, their widespread acceptance and
adoption will be hindered, and the potential societal and economic benefits will not be fully realised.12 Despite the central importance of trust, to date little is known about how trust in AI is experienced by people in different countries across the globe, or what influences this trust.13 In 2020, we conducted the first deep dive survey examining trust in AI systems across five western countries: Australia, Canada, Germany, the UK and the USA (Gillespie, Lockey & Curtis, 2021). The current study extends this focus on trust in AI by examining the perspectives of people representing 17 countries drawn from
all global regions: the original five western countries in addition to Brazil, China, Estonia, Finland, France, India, Israel, Japan, the Netherlands, Singapore, South Africa, and South Korea. Our research aims to understand and quantify people's trust in and attitudes towards AI, benchmark these attitudes over time, and explore similarities and differences across countries. Taking a global perspective is important given AI systems are not bounded by physical borders and are rapidly being deployed and used across the globe. Our report is structured to provide evidence-based insights on the following questions about the public's trust and acceptance of AI:
- To what extent do people trust AI systems?
- How do people perceive the benefits and risks of AI?
- Who is trusted to develop, use and govern AI?
- What expectations do people have about the development, governance and regulation of AI?
- How do people feel about the use of
AI at work?
- How well do people understand AI?
- What are the key drivers of trust and acceptance of AI?
- How have trust and attitudes towards AI changed over time?

How we conducted the research

We collected data in each country using representative research panels.14 This approach is common in survey research to recruit people who are representative of a national population. Panel members were invited to complete the survey online, with data collected between September and October 2022. The total sample included 17,193 respondents from 17 countries. We chose the countries based on three criteria: 1) representation across all nine global regions15; 2) leadership in AI activity and readiness16; and 3) diversity on the Responsible AI Index.17 The sample size across countries ranged from 1,001 to 1,021 respondents. Surveys were conducted in the native language(s) of each country, with the option to complete in English, if preferred. To ensure question equivalence across countries, surveys were professionally translated and back-translated from English to each respective language, using separate translators. See Appendix 1 for further method details.

Who completed the survey?

Country samples were nationally representative of the adult population on gender, age and regional distribution, matched against official national statistics within each country. Across the total sample, the gender balance was 50% women, 49% men and 1% non-binary and other gender identities. The mean age was 44 years and ranged from 18 to 91 years. Ninety percent of respondents were either currently employed
(67%) or had prior work experience (23%). These respondents represented the full diversity of industries and occupational groups listed by the OECD.18 Almost half the sample (49%) had a university education. Further details of the sample representativeness, including the demographic profile for each country sample, are shown in Appendix 2.

17 countries, 17,193 respondents: Australia, Brazil, Canada, China, Estonia, Finland, France, Germany, India, Israel, Japan, Netherlands, Singapore, South Africa, South Korea, United Kingdom, United States.

[Figure: demographic profile of the total sample by age group, gender, occupation and education.]

How we asked about AI
After asking about respondents' understanding of AI, the following definition of AI was provided: Artificial Intelligence (AI) refers to computer systems that can perform tasks or make predictions, recommendations or decisions that usually require human intelligence. AI systems can perform these tasks and make these decisions based on objectives set by humans but without explicit human instructions (OECD, 2019).

Given that perceptions of AI systems can be influenced by the purpose and use case,19 survey questions asking about trust, attitudes and governance of AI systems referred to one of five AI use cases (randomly allocated): Healthcare AI (used to inform decisions about how to diagnose and treat patients), Security AI (used to inform decisions about public safety and security), Human Resources AI (used to inform decisions about hiring and promotion), Recommender AI (used to tailor services to consumers), or AI in general (i.e. AI systems in general). These use cases were chosen as they represent domains where AI is being rapidly deployed and is likely to be used by, or impact, many people. Before answering questions, respondents were provided with a description of the AI use case, including what it is used for, what it does and how it works. These descriptions are shown below, and were developed based on current in-use systems and input from domain experts working in healthcare, security, human resources, and recommender systems, respectively.

Security AI: An AI system used to help identify suspicious or criminal behaviour, security threats, and people of interest to police. It works by processing and analysing a range of information, such as face and fingerprint scans collected in places like country borders, and photos and live footage of people and vehicles in public places collected through security cameras. Police and security agencies use Security AI to inform decisions about public safety and security.

Human Resources AI: An AI system used to help select the most suitable applicants for a job, identify workers who are most likely to perform well in a job, and predict who is most likely to quit. It works by collecting and comparing worker characteristics, employee data, and performance over time, and analysing which qualities are related to better job performance and job retention. Managers use Human Resources AI to inform decisions about hiring and promotion.

Recommender AI: An AI system used to personalise services such as news, social media content and product recommendations by providing content and products that are most relevant to the user. It works by predicting a person's choices and preferences based on their characteristics (e.g. age, gender, location), past behaviour, interests or preferences, and the behaviour of similar users. Companies use Recommender AI to tailor services to consumers.

Healthcare AI: An AI system used to improve the diagnosis of disease (e.g. cancer), inform the best treatment options, and predict health outcomes based on patient data. It works by comparing a patient's health data (e.g. symptoms, test and scan results, medical history, family history, age, weight and gender) to large datasets based on many patients. Doctors use Healthcare AI to inform decisions about patient diagnosis and treatment.

How we analysed the data
We conducted statistical analyses to examine differences between countries, AI use cases, and demographic groups. Where significant and meaningful differences
are evident between countries, we report country-level data. Further details of the statistical procedures are discussed in Appendix 1. We also report meaningful differences between groups and AI use cases.

TOPIC ONE: To what extent do people trust AI systems?
To answer this question, we asked respondents how much they trust and accept a range of AI systems, and the extent to which they perceive them to be trustworthy. We also asked people how they feel about AI. We define trust in AI as a willingness to accept vulnerability to an AI system (e.g. by relying on system recommendations or output, or sharing data) based upon positive expectations of how the system will operate (e.g. accuracy, helpfulness, data privacy and security).20

Most people are ambivalent or unwilling to trust AI systems
Three
out of five people (61%) across countries are wary about trusting AI systems, reporting either ambivalence or an unwillingness to trust (see Figure 1). In contrast, 39% report that they are willing to trust AI systems. Mirroring these findings, most people (67%) report low to moderate acceptance of AI. Only
a third of people across countries report high acceptance. There is a strong association between trust in AI and acceptance of AI (correlation r = 0.71, p < .001).

There are stark differences across countries: AI is most trusted and accepted in the emerging BICS economies of Brazil, India, China, and South Africa
Our survey revealed stark differences in trust and acceptance of AI systems across countries. Figure 2 shows trust in AI is highest in India, China, Brazil, and South Africa, respectively. These countries are each part of the BRICS alliance of major emerging economies. We use the acronym BICS in this report to denote the four countries of Brazil, India, China, and South Africa included in our survey that showed a distinctively different pattern of findings to the other countries. In the BICS countries, most people (56-75%) trust AI systems, with people in India reporting the greatest willingness to trust, followed by China. In contrast, a minority of people in other countries report trusting AI, with the Finnish reporting the lowest trust (only 16%). We see a similar pattern for acceptance of AI as we do for trust. The BICS countries are notably higher in their acceptance of AI, with 48-67% of people in these countries reporting high acceptance. Again, India and China lead the way, with 66-67% reporting high acceptance of AI, compared to only 18% in the Netherlands and Canada, respectively. There are low levels of AI acceptance across all Western countries, with Germany reporting the most acceptance (35% high acceptance). The higher trust and acceptance of AI in the BICS countries is likely due to the accelerated uptake of AI in these countries, and the increasingly important economic role of emerging technologies.21 As discussed in the forthcoming sections of this report, people in the BICS countries are the
most positive about AI, perceive the most benefits from it, and report the highest levels of AI adoption and use at work.

Figure 1. Willingness to trust and accept AI systems
"How willing are you to trust AI [specific application]?" (8 items)
Trust: 29% unwilling to trust; 32% ambivalent; 39% willing to trust
"To what extent do you accept the use of AI [specific application]?" (3 items)
Acceptance: 29% low acceptance; 38% moderate acceptance; 33% high acceptance
% Unwilling = Somewhat unwilling, Unwilling, or Completely unwilling; % Ambivalent = Neither willing nor unwilling; % Willing = Somewhat willing, Willing, or Completely willing. % Low acceptance = Not at all or Slightly; % Moderate acceptance = Moderately; % High acceptance = Highly or Completely.

Trust and acceptance depend on the application: AI use in human resources is the least trusted and accepted
People's trust and acceptance of AI depends on the specific application or use case. There is a tendency for people to trust the use of AI in human resources (HR) the least (34% willing, M = 3.9; see Figure 3), and the use of AI in healthcare diagnosis and treatment the most (44% willing, M = 4.3). This difference likely reflects the important direct benefit that increased precision of medical diagnosis and treatments affords people, combined with the high levels of trust in doctors in most countries.

Figure 3. Trust in AI systems across applications
"How willing are you to trust AI [specific application]?" (8 items)
Healthcare AI: 31% unwilling; 25% ambivalent; 44% willing
Security AI: 28% unwilling; 32% ambivalent; 40% willing
AI in general: 27% unwilling; 34% ambivalent; 39% willing
Recommender AI: 30% unwilling; 34% ambivalent; 36% willing
HR AI: 35% unwilling; 31% ambivalent; 34% willing
% Unwilling = Somewhat, Mostly or Completely unwilling to trust; % Ambivalent = Neither willing nor unwilling to trust; % Willing = Somewhat, Mostly or Completely willing to trust.

Figure 2: Willingness to trust and accept AI systems across countries (% willing to trust / % willing to accept)
India 75/67; China 67/66; South Africa 57/48; Brazil 56/54; Singapore 45/33; United States 40/24; Germany 35/35; Israel 34/35; Australia 34/23; United Kingdom 34/20; Canada 32/18; France 31/23; South Korea 31/32; Netherlands 29/18; Estonia 26/26; Japan 23/19; Finland 16/23
% Willing to trust = Somewhat willing, Mostly willing, or Completely willing; % Willing to accept = Highly accept or Completely accept.

This difference between applications is meaningful in some countries, but not
all. Specifically, people in eight countries (Germany, the Netherlands, Finland, Estonia, Israel, South Korea, Japan, and China) report lower trust and acceptance of Human Resources AI than of either Healthcare AI or AI in general (see Figure 4).

Figure 4. Trust in AI systems across countries
[Chart: mean trust in each AI application (AI in general, HR AI, Healthcare AI, Security AI, Recommender AI) on a 7-point scale; countries sorted in order of Healthcare AI: India, China, Brazil, South Africa, Singapore, Germany, Israel, South Korea, United States, Estonia, Netherlands, Canada, United Kingdom, Australia, Japan, France, Finland.]

People are more willing to rely on than share information with AI systems, particularly with security and recommender systems
We drilled down to examine two key ways people demonstrate trust in AI systems: reliance and information sharing. People in general are more willing to rely on, rather than share information with, AI systems (see Figure 5). People are particularly more willing to rely on Security and Recommender AI applications than share information with them. However, this pattern is reversed for Healthcare AI, where respondents are usually more willing to share information than rely on the outcomes of the application. This most likely reflects that sharing information with healthcare providers and systems is normal and routine to facilitate effective care.

Figure 5. Willingness to rely on and share information with AI systems
"How willing are you to: rely on information provided by [specific AI application] / share information with [specific AI application]?" (8 items; % Unwilling / % Neutral / % Willing)
Healthcare AI: rely on output 28/34/38; share information 26/24/50
Security AI: rely on output 21/33/46; share information 35/28/37
AI in general: rely on output 21/35/44; share information 33/30/37
Recommender AI: rely on output 23/35/42; share information 38/30/32
HR AI: rely on output 31/33/36; share information 38/26/36
% Unwilling = Somewhat unwilling, Unwilling, or Completely unwilling; % Neutral = Neither willing nor unwilling; % Willing = Somewhat willing, Willing, or Completely willing.

Reliance: Assesses people's willingness to rely on an AI system's output, such as a recommendation or decision (i.e. to trust that it is accurate). If people are not willing to rely on AI system output, the system will not be used.
Information sharing: Relates to the willingness to share information or data with an AI system (i.e. to provide data to enable the system to work or perform a service for you). All AI systems are trained on large databases, but
only some require the specific user to share information as input to function.

AI systems are perceived as more trustworthy in BICS countries
To understand how people view the trustworthiness of AI systems, we asked people about three key components of trustworthiness: ability, humanity and integrity.
Ability: AI systems are fit-for-purpose and perform reliably to produce accurate output as intended.
Humanity: AI systems are designed to deliver beneficial outcomes for people and society, and have a positive impact.
Integrity: AI systems are safe and secure to use and adhere to commonly accepted ethical principles (e.g. fairness, do no harm), human rights (e.g. privacy) and applicable laws.

As shown in Figure 6, we see a similar pattern across countries in beliefs about the trustworthiness of AI systems as for AI trust and acceptance. People in the BICS countries hold much more positive beliefs about the trustworthiness of AI systems compared to all other countries, with 79-93% viewing these systems as trustworthy. Indians again have the
most positive views, with 93% agreeing that AI systems are trustworthy, followed by the Chinese (87%). In contrast, people in Western countries, as well as Japan, have the least favourable beliefs about the trustworthiness of AI, with 49-58% viewing AI systems as trustworthy. On average, 63% of people across all countries and applications perceive AI systems as trustworthy. Perceptions of trustworthiness are typically higher than trusting intentions because trust involves risk and vulnerability (e.g. by relying on AI output or sharing information with an AI system), whereas perceiving a system as trustworthy does not. There is a strong association between perceived trustworthiness and trust in AI systems (r = 0.77, p < .001). In line with the findings for trust and acceptance, we found differences in trustworthiness across applications. In most countries, Human Resources AI is seen as less trustworthy than
other AI applications.

Figure 6. Perceptions of the trustworthiness of AI systems
[Chart: perceived trustworthiness (% agree), with ability, humanity and integrity (% agree) shown separately; countries sorted by the Humanity category: India, China, Brazil, South Africa, Singapore, Israel, Estonia, South Korea, Japan, United Kingdom, United States, Canada, Germany, France, Netherlands, Finland, Australia. % Agree = Somewhat agree, Agree, or Strongly agree. Survey items (14 items): "I believe [specific AI application] would: produce output that is accurate (ability) / have a positive impact on most people (humanity) / be safe and secure to use (integrity)".]

More people believe AI systems are capable and beneficial than believe they are safe and designed to uphold ethical principles and rights
As shown in Figure 6, people have most faith in the ability of AI systems to produce accurate and reliable output and provide helpful, beneficial services for people. In contrast, people are more sceptical about the extent to which AI systems are safe and secure to use and adhere to commonly accepted ethical principles (e.g. fairness, do no harm) and privacy rights (integrity). This was a consistent pattern across countries, with the exception of China and India. For example, in Japan, most people (55-64%) view AI systems as technically competent (M = 4.6/7) and beneficial (M = 4.9/7); however, only 34% agree that AI systems uphold ethical principles and rights (integrity M = 4.1/7). This difference between AI integrity and AI ability and humanity is evident for AI in general and for two specific applications: AI use for security and AI recommender systems. There are no meaningful differences between perceptions of AI ability, humanity and integrity for Healthcare AI or Human Resources AI.

Most people are optimistic and excited about AI; however, many also feel worried and fearful
We asked people the extent to which they feel a range of emotions about the AI applications. A majority of people report positive emotions such as feeling optimistic, excited, or relaxed about these AI systems. However, just under half of people also report feeling worried or fearful about the AI applications, and just under a quarter feel outraged (see Figure 7). People who have positive emotions towards an AI system are more likely to also trust in AI, as demonstrated by the strong, positive correlation (r = 0.68, p < .001). In contrast, when people feel negative emotions towards AI, this is associated with lower trust (r = -0.28, p < .001). Further analysis22 revealed people commonly experience ambivalent feelings towards AI: 41% experience both high positive and high negative emotions, for example feeling excited but also worried about AI. In contrast, 35% experience high positive emotions coupled with low negative emotions, and 16% have low positive emotions coupled with high negative emotions. Only 8% report feeling low positive and negative emotions towards AI.

Figure 7. Emotions associated with AI
"In thinking about AI [specific application], to what extent do you feel..." (% Moderate to High): Optimistic 67%; Excited 60%; Relaxed 57%; Fearful 47%; Worried 48%; Outraged 24%
5-point scale. % Moderate to High = Moderately, Very, or Extremely.

People in the BICS countries feel most optimistic, excited and relaxed about AI
Figure 8 shows how people feel about AI in each country, ordered by the countries with people who felt most positive about AI. People in the BICS countries are the most optimistic, excited, and relaxed about AI, and people in Japan
the least. Positive emotions were significantly stronger than negative emotions in the BICS countries, as well as in Estonia, Finland, and Israel. Fear or worry about AI was the dominant emotion experienced by people in Australia, Canada, France, and Japan, with people in France amongst the most fearful and
worried. People generally did not feel much outrage towards AI. While people in India are most likely to feel positive emotions about AI, they also have one of the highest levels of fear and are more likely to report outrage than other countries. This reinforces that in many countries, fear and worry about AI often coincide with optimism or excitement.

Figure 8. Emotions towards AI across countries
[Chart: mean levels of each emotion (Optimistic, Excited, Relaxed, Fearful, Worried, Outraged) on a 5-point scale (1 = Not at all, 2 = Slightly, 3 = Moderately, 4 = Very, 5 = Extremely); countries sorted by the Excited category: India, China, Brazil, South Africa, Israel, Singapore, Estonia, Germany, France, Finland, South Korea, United States, Netherlands, Canada, Australia, Japan, United Kingdom. Survey item: "In thinking about AI [specific application], to what extent do you feel...".]

Younger generations, the university educated, and managers are more trusting and accepting of AI systems, and more likely to feel positive emotions
As shown in Figure 9 and through statistical analyses, younger people, notably Generation Z and Millennials, are more trusting and accepting of AI than older generations, and view AI systems as more trustworthy than older generations do. These generational effects held across most countries and are particularly pronounced in Australia and the USA. For example, in Australia, 25% of older generations trust AI compared to 42% of Gen X and Millennials, and 13% of older generations accept AI compared to 34% of Gen Z and Millennials. In contrast, in South Korea and China, we see a reversal of this pattern, with older generations more trusting of AI than younger generations. For example, in South Korea, 23% of Gen Z and Millennials are willing to trust AI, compared to 44% of Baby Boomers and older generations. People with a university education are also more trusting and accepting of AI than those without a university degree, and hold more positive views of the trustworthiness of AI. This difference was also particularly evident in Australia, with 42% of university-educated Australians willing to trust AI, compared to 27% of Australians without a university education. Managers are also more trusting and accepting of AI, and perceive it as more trustworthy, than people in other occupations. In addition, younger generations, those with a university education, and managers are more likely to feel positive emotions about AI. There are no generational, educational, or occupational differences in the experience of negative emotions about AI. It is noteworthy that there are no meaningful differences across men, women and other genders in trust, acceptance, or emotions towards AI; however, in a few countries (the USA, Singapore, Israel, and South Korea), men were more trusting or accepting of AI,
and reported more positive emotions, than other genders.

Figure 9: Trust and acceptance of AI systems by generation and education (% trust / % high acceptance)
Gen Z & Millennials (18-39): 42/40; Gen X (40-55): 37/31; Baby Boomers+ (56-91): 33/22
University education: 46/40; No university education: 32/27
Manager: 54/48; Professional & Skilled: 42/37; Administrative and Service/Sales: 37/31; Manual: 32/27

TOPIC TWO: How do people perceive the benefits and risks of AI?
To answer this
question, we asked people about a range of potential benefits and risks associated with AI, the likelihood of risks occurring, as well as whether the benefits outweigh the risks.

People expect AI will deliver a range of benefits, but perceive more process benefits than benefits to people
Most people (85%) believe the use of AI will result in a range of benefits, as shown in Figure 10. People who perceive more benefits from AI are also much more likely to trust in AI systems (r = 0.62, p < .001). People have particularly high expectations that AI will improve efficiency, innovation, effectiveness and resource utilisation, and reduce costs. People perceive the process benefits of AI, such as improved efficiency and innovation, as greater than the people benefits of AI, such as improving outcomes for people, and enhancing decision-making and what people can do. In many countries, the benefits of using AI in Human Resources, particularly the benefits for people and effectiveness, were
211、 lower than the benefits of using other applications of AI(e.g.Security,Healthcare,Recommender systems and AI in general).Figure 10:The perceived benefits of AI useTo what extent do you expect these potential benefits from the use of AIspecific application?%Low%Moderate%HighOverall benefitsImproved
212、efficiencyInnovationImproved effectivenessReduced costsBetter use of resourcesEnhanced precisionImproved outcomes for peopleEnhancing what people can doEnhanced decision-making153649132760162856183151222751193051223048233245243145233839Low=Not at all or To a small extentModerate=To a moderate extent
213、High=To a large extent or To a very large extent&personalisationPeople in the BICS countries perceive the greatest benefits of AIThere are significant differences between countries in perceptions of AI benefits.As shown in Figure 11,people in the BICS countries have the most positive view of the ben
214、efits of AI(Ms=3.84.0/5).In contrast,people in Australia,Canada,the UK,USA,the Netherlands,Finland,and Japan,were less convinced by the benefits of AI(Ms=3.03.1/4).2023 The University of Queensland ABN:63 942 912 684 CRICOS Provider No:00025B.2023 KPMG,an Australian partnership and a member firm of
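The Low/Moderate/High bands used in Figures 10 and 12 collapse a five-point response scale into three categories. As a minimal illustration of that banding (a sketch on hypothetical data, not the authors' analysis code, assuming responses are coded 1-5):

```python
from collections import Counter

def band_percentages(responses):
    """Collapse 5-point responses into the report's bands:
    Low = 1-2 (not at all / small extent), Moderate = 3,
    High = 4-5 (large / very large extent)."""
    bands = Counter()
    for r in responses:
        if r <= 2:
            bands["Low"] += 1
        elif r == 3:
            bands["Moderate"] += 1
        else:
            bands["High"] += 1
    n = len(responses)
    return {k: round(100 * v / n) for k, v in bands.items()}

# Hypothetical sample of 20 respondents for one survey item
sample = [1, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4, 4, 5, 5, 5, 5, 5, 5, 5, 5]
print(band_percentages(sample))  # {'Low': 15, 'Moderate': 20, 'High': 65}
```

Applied to each survey item, this kind of banding yields the three percentage columns shown in the figures.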
Figure 11: The perceived benefits of AI across countries
"To what extent do you expect these potential benefits from the use of AI [specific application]?"
[Chart: mean perceived benefit (5-point scale) for each of the nine benefits, by country. Countries sorted in order of Improved efficiency: China, India, South Africa, Brazil, South Korea, Israel, Estonia, Germany, Singapore, France, Finland, Japan, United Kingdom, Netherlands, United States, Canada, Australia.]

Figure 12: The perceived risks of AI use
"How concerned are you about these potential risks of AI use [specific application]?"

Risk                             %Low   %Moderate   %High
Overall risks                     27       29         44
Cybersecurity risks               16       24         60
Manipulation or harmful use       24       26         50
Job loss due to automation        23       27         50
Loss of privacy                   25       28         47
System failure                    24       31         45
Deskilling                        27       30         43
Human rights being undermined     30       29         41
Inaccurate outcome                32       40         28
Potential for bias                42       30         28

Low = Not at all or To a small extent; Moderate = To a moderate extent; High = To a large extent or To a very large extent

People are concerned about a range of potential risks from AI use, particularly cybersecurity risks

While people expect significant benefits from AI, the large majority (73%) also perceive significant potential risks from AI. People who perceive more risks of AI use are somewhat less trusting of AI systems (r = -0.25, p < .001). Cybersecurity risk (e.g. from hacking or malware) is the dominant concern, raised by 84% of people. Other risks of moderate to very large concern, raised by more than two-thirds of people (68-77%), include manipulative or harmful use of AI, job loss and deskilling, loss of privacy, system failure, undermining of human rights, and inaccurate outcomes. In comparison, people are less concerned about the risk of bias from AI use. However, bias is still a concern for the majority of people (58%). This may reflect that the general public perceive AI systems as less biased than humans, or alternatively, are less aware of the potential risk of bias from AI systems.
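Relationships such as the r = -0.25 between perceived risk and trust are Pearson product-moment correlations between individual-level scores. A minimal sketch of the computation (hypothetical numbers, not the study data):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical perceived-benefit scores vs trust scores for 8 respondents
benefits = [2, 3, 3, 4, 5, 4, 1, 5]
trust = [1, 3, 2, 4, 5, 3, 2, 4]
print(round(pearson_r(benefits, trust), 2))
```

A positive r (as for perceived benefits and trust) means the two scores rise together; a negative r (as for perceived risks and trust) means higher scores on one go with lower scores on the other.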
To complement these quantitative findings, we asked people: "What concerns you most about the use of AI [specific application]?" Thematic analysis of this open-ended data23 reinforced that people across all countries are concerned about each of the risks shown in Figure 12, including concerns about:
- privacy breaches, cybersecurity attacks and hacking
- manipulation and harmful use, including misuse by service providers and governments (e.g. to monitor or control)
- job loss, technological unemployment and deskilling
- inaccurate outcomes and recommendations, and poor or biased decisions
- system failure or malfunction causing harm
- the loss of human control and agency, and loss of human judgement in decision-making, resulting in unintended consequences (including AI taking over).

The qualitative data further highlighted concerns around:
- a lack of transparency of when and how AI is being used, and how AI generates decisions and outcomes
- a lack of regulations, policies and governance to make AI use safe and ethical.

It is also important to note that some people report having no concerns.

People view the risks of AI in a comparable way across countries

In contrast to the distinct differences across countries in how people view the benefits of AI, there are many commonalities in how people from different countries perceive the risks of AI use. In almost all countries, people are most concerned about cybersecurity risks. The exceptions are people in India and South Africa, who are most concerned with job loss due to automation, followed by cybersecurity risks. This concern about job loss may reflect the recent increase in AI-related activity in these two countries.24 While AI acceleration clearly has the potential to provide economic benefits to these countries, it may also result in job losses. In Japan, the top concerns are AI system failure (e.g. where the AI system malfunctions or goes offline) and cybersecurity, which may reflect the heavy dependence on smart technology in Japan. We also see that across all countries, people are least concerned about the potential risk of bias from AI use, followed by inaccurate outcomes from AI use. People in South Africa, South Korea and Brazil perceive the risks of AI as higher than people in most other countries. In contrast, people in Germany perceive the potential risks of AI as lower than people in most other countries.

Figure 13: The perceived risks of AI across countries
"How concerned are you about these potential risks of AI use?"
[Chart: mean concern (5-point scale) for each of the nine risks, by country. Countries sorted by Cybersecurity risks: Finland, South Korea, South Africa, Singapore, Brazil, Estonia, Australia, France, Israel, Canada, USA, UK, Netherlands, Japan, China, Germany, India.]
People are divided about the likelihood that AI risks will impact people

We asked respondents how likely it is that one or more of these risks would impact people in their country, as well as them personally. As shown in Figure 14 (combined score), people were split in their views, with 31% believing these risks are likely to impact people, 30% believing they are unlikely to impact, and 39% believing that it is as likely as unlikely that one or more risks will impact people. People in western countries and Israel perceive the risks as more likely to impact other people in their country than them personally. People in South Korea, India and South Africa are the most likely to believe the risks associated with AI will impact people. In contrast, people in the EU countries of Finland, Estonia, France, and Germany are the least likely to believe these risks will impact people.

Figure 14: The likelihood of risks impacting people
"Within the next 10 years, how likely is it that one or more of these risks will impact...?"

                         %Unlikely   %Equally likely as unlikely   %Likely
People in your country      29                  28                   43
You personally              39                  27                   34
Combined                    30                  39                   31

Unlikely = Somewhat unlikely, Unlikely or Very unlikely; Equally likely as unlikely = Equally likely as unlikely; Likely = Somewhat likely, Likely, or Very likely
The university educated, younger generations, and managers perceive more benefits of AI, but there are no demographic differences in perceptions of risk

Younger generations, namely Gen Z and Millennials, view the benefits of AI more positively than people of the Baby Boomer generation or older (55% vs 37% high). People with a university education also view the benefits of AI more positively than people without a degree (56% vs 41% high), are more likely to view the benefits of AI as outweighing the risks (51% vs 38% agree), and are more likely to believe that one or more of the risks associated with AI will impact them personally (38% vs 29%). Managers are also more likely to perceive benefits associated with AI than those with other occupations (62% vs 43-51% high), and more likely to believe the benefits of AI outweigh the risks (58% vs 36-47% agree). There are no differences between men, women, and other genders in the perceived benefits of AI, and no differences in the perceived risks across generation, education or occupational groupings.

People are more likely to believe the benefits of AI outweigh the risks in the BICS countries and Singapore: people in western countries are more circumspect

There are large country differences in how people perceive the AI benefit-risk trade-off. As shown in Figure 15, most people in the BICS countries, Singapore and Israel (53-81%) agree the benefits of AI outweigh the risks. In contrast, people in the western countries, Japan and South Korea are less convinced and more ambivalent, with only 40-48% agreeing the benefits outweigh the risks.

Figure 15: Perceptions across countries that AI benefits outweigh risks
"Thinking about people in your country generally, to what extent do you agree the benefits of AI [specific application] outweigh the risks?" (2 items)

Country           %Agree
Whole sample        50
China               81
Brazil              71
India               69
Singapore           59
South Africa        58
Israel              53
South Korea         48
Australia           44
Finland             44
Canada              42
Germany             42
Estonia             42
Japan               42
United States       41
United Kingdom      40
France              40
Netherlands         40

Agree = Strongly agree, Agree, and Somewhat agree

One in two people believe the benefits of AI outweigh the risks

We asked people whether the benefits of AI outweigh the risks, both in relation to people in their country and to themselves personally. In both cases, half agree that the benefits of AI outweigh the risks, and under a quarter (21-24%) disagree. The remainder (26-29%) are neutral. In several countries, people were less likely to believe the benefits of AI use in the Human Resources application outweigh the risks.
TOPIC THREE
Who is trusted to develop, use and govern AI?

Given the risks and benefits associated with AI, we asked people who they trusted to develop and govern AI. Specifically, we asked how much confidence people have in a variety of entities to develop and use AI, as well as to regulate and govern it. We first explore the insights for the total sample, and then examine country differences.
People are most confident in universities and defence organisations to develop, use and govern AI in the best interests of the public

As shown in Figure 16, people have the most confidence in their national universities and research institutions, national defence forces, and international research organisations to develop and use AI in the best interests of the public, with between 77-82% reporting moderate to complete confidence (Ms = 3.4-3.5/5). Seventy-one percent report feeling confident in technology companies to develop and use AI (M = 3.2/5). People have the least confidence in government and commercial organisations (63% each, Ms = 2.9-3.0), with a third of people reporting no or low confidence in government and commercial organisations to develop and use AI. A solution may be for these organisations to collaborate in AI development with more trusted entities, such as universities and research institutions.

There is a similar pattern regarding confidence in entities to regulate and govern AI in the best interests of the public (see Figure 17).25 People are more confident in national universities, international research organisations, as well as security and defence organisations (76-79% confidence, Ms = 3.4/5 each) to regulate and govern AI than in other entities. People reported the least confidence in governments, technology, and commercial organisations (60-66%, Ms = 2.9-3.0). About a third of people report no or low confidence in these entities to develop and regulate AI (see Figure 17). When people are confident in entities to develop and govern AI, they are more likely to trust in AI systems (correlations ranging from 0.42 for defence forces to 0.54 for technology companies, p < .001).

Figure 16: Confidence in entities to develop and use AI
"How much confidence do you have in the following to develop and use AI in the best interests of the public?"

Entity                                  %Don't know   %No or low   %Moderate   %High or complete
National universities*                       3            15           32             50
Security and defence forces*                 4            19           28             49
International research organisations         7            15           32             46
Technology companies                         3            26           33             38
Government*                                  4            33           30             33
Commercial organisations                     3            34           35             28

*Research institutions, defence forces, and government were country specific
Countries vary in their confidence in entities to develop, use and govern AI

There is significant variation across countries in people's confidence in entities to develop, use and govern AI, particularly confidence in government and technology firms (see Figure 18). A lack of confidence in government to develop, use and govern AI in the best interests of the public was reported by about half the people in South Africa (52%), the USA (49%), Japan (47%), and the UK (45%). In contrast, many people in China (86%), India (70%), and Singapore (60%) have high or complete confidence in their governments to develop, use and govern AI. While confidence in technology companies to develop, use and govern AI is generally low in western countries and Israel, particularly Finland, Canada, and Australia (30-40% no or low confidence, Ms = 2.7-2.8), it is comparatively high in the BICS countries (51-73% high or complete confidence, Ms = 3.7-4.2).

Figure 17: Confidence in entities to regulate and govern AI
"How much confidence do you have in each of the following to regulate or govern AI in the best interests of the public?"

Entity                                                         %Don't know   %No or low   %Moderate   %High or complete
National universities*                                              4            17           32             47
International research organisations                                7            17           31             45
Security and defence forces*                                        4            20           29             47
International organisations (e.g. ISO, UN)                          6            20           32             42
A partnership or association of tech companies, academics,
  and civil society groups                                          6            19           35             40
Existing agencies that regulate or govern specific sectors*         3            25           36             36
Technology companies                                                3            31           32             34
Government*                                                         3            33           30             34
Commercial organisations                                            4            36           34             26

*Research institutions, defence forces, existing agencies that govern specific sectors, and government were country specific

Figure 18: Confidence in technology and government entities to develop, use and govern AI
[Chart: mean confidence by country (5-point scale), amalgamating confidence to develop and use AI with confidence to regulate and govern AI, shown separately for government and for technology companies. Countries: China, India, Brazil, Singapore, South Korea, Estonia, South Africa, UK, Israel, Japan, Finland, Germany, France, Netherlands, USA, Australia, Canada; means range from 2.4 to 4.4.]
General trust partly explains confidence in entities to use and govern AI

These country differences in confidence in entities to use and govern AI can be partly explained by generalised trust towards these entities. General trust in an entity sets a foundation that influences domain-specific trust in the entity to perform particular actions. There are very high correlations between general trust in government (to do the right thing and act competently) and confidence in government to develop/use AI (r = 0.80, p < .001) and to regulate/govern AI (r = 0.82, p < .001). The countries where people are most confident in their government to develop, use and govern AI, namely China, India, and Singapore, are also the countries with higher general trust in their governments (60-82% high trust, Ms = 4.9-5.6/7). Similarly, the countries where people have the least confidence in government to develop, use and regulate AI also have low general trust in government, namely South Africa, the UK, the USA, and Japan (53-65% low trust, Ms = 2.7-3.1). We also find high correlations (ranging between 0.55 and 0.70, p < .001) between general trust in universities and research institutions, security forces, and business, and confidence in each of these entities to develop, use and govern AI.

Younger generations, the university educated, and managers are more confident in entities to develop, use and govern AI

Generation X and Millennials are more confident than Baby Boomers and older generations in all entities, except government, to develop and regulate AI in the best interests of the public (42% vs 28% high confidence). People with a university education are more confident in some entities to develop and regulate AI, particularly government (38% vs 23% high confidence), defence forces (50% vs 40% high confidence), and international research and scientific organisations (49% vs 38% high confidence). Managers are more confident than all other occupation groups in government (44% high confidence vs 20-33%), commercial organisations (35% vs 19-23%), and technology companies (44% vs 25-31%) to develop and regulate AI.
TOPIC FOUR
What do people expect of the management, governance and regulation of AI?

We asked people about their expectations around AI management, governance and regulation, including the extent to which they think regulation is necessary, who should regulate, and whether current regulations and institutional safeguards are sufficient. We also asked what development and governance principles and practices are important for people to trust AI systems. To contextualise these expectations, we first asked people their views about the impact of AI on society.
Sixty-one percent believe the societal impact of AI is uncertain

Most people (61%) believe the long-term impact of AI on society is uncertain and unpredictable (see Figure 19). The more uncertainty people perceive, the less likely they are to trust AI systems (r = -0.25, p < .001). While the majority of people in almost all countries agree the societal impact of AI is uncertain, people in the western countries of the USA, Australia, the UK and Canada perceive the greatest uncertainty (70-72%). In contrast, those in South Korea, Japan, Israel and Brazil perceive the least uncertainty (43-55%).

Most people believe AI regulation is required and expect some form of external, independent oversight

Given the perceived uncertain impact of AI on society, it is not surprising that most people across countries (71%) believe AI regulation is required. Less than one in five people (17%) believe AI regulation is not needed, with the remaining 12% unsure. This finding corroborates prior surveys indicating a strong desire for the regulation of AI.26 People are broadly supportive of multiple forms of regulation. As shown in Figure 20, the majority of people (64-70%) expect a range of entities to be involved in regulating AI, including government and/or existing regulators, industry that uses or develops AI, co-regulation by industry, government, and existing regulators, and a dedicated, independent AI regulator.

Figure 19: Perception that the impact of AI on society is uncertain
"To what extent do you agree with the following statements? (1) The impact of AI [specific application] is unpredictable. (2) The long-term impact of AI [specific application] on society is uncertain. (3) There is a lot of uncertainty around AI [specific application]."

Country           %Disagree   %Neutral   %Agree
Whole sample         15          24        61
United States         9          19        72
Australia            10          19        71
United Kingdom       10          19        71
Canada                9          21        70
Singapore            10          21        69
Finland               9          24        67
Estonia              11          24        65
Netherlands          11          26        63
China                14          23        63
South Africa         18          20        62
France               13          28        59
India                21          21        58
Germany              19          25        56
Brazil               24          21        55
Israel               21          26        53
Japan                16          32        52
South Korea          22          35        43

Figure 20: Expectations of who should regulate AI
"I think AI [specific application] should be regulated by..."

                                            %Disagree   %Neutral   %Agree
Co-regulation                                  11          19        70
The government and/or existing regulators      15          18        67
A dedicated, independent AI regulator          12          21        67
Industry that uses or develops AI              16          20        64
AI regulation is not needed                    71          12        17

Disagree = Somewhat disagree, Disagree, or Strongly disagree; Neutral = Neutral; Agree = Somewhat agree, Agree, or Strongly agree
Countries vary in expectations of who should regulate

In many countries, people express a preference for some form of independent regulation over regulation by industry (see Figure 21). For example:
- Australians prefer AI to be regulated by government and existing regulators, or by an independent AI body, rather than by industry (M = 5.3 vs 4.6).
- An independent regulator is endorsed as a better option than industry regulation in the UK (M = 5.4 vs 4.6), Germany (M = 4.9 vs 4.5), and Finland (M = 4.9 vs 4.4).
- Co-regulation is a preferred option compared to regulation by industry in the UK (M = 5.1 vs 4.6), Canada (M = 4.9 vs 4.5), Finland (M = 4.9 vs 4.4), Israel (M = 5.1 vs 4.5) and China (M = 5.7 vs 5.3).
- In contrast, in South Africa, all forms of regulation are seen as preferable compared to regulation by government (Ms = 5.0-5.4 vs 4.5).

As shown in Figure 21 (black dots), people in India, China and Singapore are more likely to see AI regulation as unnecessary, compared to people in other countries. Specifically, a quarter or more of Singaporeans (25%), Chinese (37%) and Indians (39%) view AI regulation as not needed. However, except for India, most people in all other countries believe AI regulation is required, ranging from 56% in China to 83% in Israel.

Figure 21: Expectations of who should regulate AI across countries
"I think AI [specific application] should be regulated by..." (The government and/or existing regulators; Industry that uses or develops AI; Co-regulation; A dedicated, independent AI regulator; AI regulation is not needed)
[Chart: mean endorsement of each regulator option by country (7-point scale, values roughly 1.5-6.0). Countries sorted in order of "AI regulation is not needed": Israel, Netherlands, Estonia, Japan, Finland, UK, South Korea, Canada, South Africa, France, Australia, USA, Germany, Brazil, Singapore, China, India.]
Only two in five people believe current safeguards are sufficient to make AI use safe

The majority (61%) of people disagree or are unsure that current safeguards around AI (i.e. rules, regulations, and laws) are sufficient to make the use of AI safe and protect them from problems (see Figure 22). This pattern was strongest in the western countries together with Israel, Japan, and South Korea. Only 39% of people believe that there are sufficient structural assurances around AI use. This finding corroborates previous surveys27 reporting that people do not think current rules are effective in regulating AI, and is problematic given the strong relationship between current safeguards and trust in AI (r = 0.66, p < .001).

However, there are stark country differences. Most people in India (80%) and China (74%) believe appropriate safeguards are already in place, more than people in any other country. About half of people in Singapore (53%) and Brazil (52%) also believe current safeguards are sufficient. Conversely, less than one in five people in Japan (13%) and South Korea (17%) agree, with people in these countries rating current safeguards lower than all other countries.

The younger generations of Gen Z and Millennials (M = 4.3) and Gen X (M = 4.0) are more likely to believe there are sufficient safeguards in place to govern AI, compared to people in the Baby Boomer generation or older (M = 3.8). Managers are also more likely to perceive sufficient safeguards than other occupations (M = 4.6 vs 4.0-4.1).

Figure 22: Perceptions of current regulations, laws, and rules to make AI use safe
"To what extent do you agree with the following: (1) There are enough current safeguards to make me feel comfortable with the use of AI [specific application]. (2) I feel assured that there are sufficient governance processes in place to protect me from problems that may arise from the use of AI. (3) The current law helps me feel that the use of AI [specific application] is safe. (4) I feel confident that there is adequate regulation of AI [specific application]."

Country           %Disagree   %Neutral   %Agree
Whole sample         32          29        39
India                 6          14        80
China                 5          21        74
Singapore            17          30        53
Brazil               27          21        52
South Africa         26          30        44
Germany              29          32        39
Estonia              29          33        38
Israel               32          31        37
Australia            39          26        35
Netherlands          39          29        32
USA                  43          27        30
UK                   37          33        30
France               39          32        29
Canada               42          30        28
Finland              38          37        25
South Korea          51          32        17
Japan                51          36        13

Disagree = Somewhat disagree, Disagree, or Strongly disagree; Neutral = Neutral; Agree = Somewhat agree, Agree, or Strongly agree
316、 Legislation.TRUST IN ARTIFICIAL INTELLIGENCE38Assurance mechanisms enhance trust in AI systemsIn addition to external rules,laws,and safeguards,we also asked people whether a range of assurance mechanisms available to organisations would influence their trust.Three out of four people(75%)report the
317、y would be more willing to trust an AI system when assurance mechanisms are in place that support ethical and responsible use.These mechanisms include monitoring system accuracy and reliability,using an AI code of conduct,oversight by an independent AI ethical review board,adhering to standards for
explainable and transparent AI, and an AI ethics certification (see Figure 23). These mechanisms increase perceptions of safeguards and reduce uncertainty. Of the specific assurance mechanisms, four out of five people (80%) agree that system accuracy and reliability monitoring would enhance their trust, with fewer, but still two-thirds (68%), agreeing that adherence to an AI ethics certification would enhance trust. The assurance mechanisms influence trust most strongly in the BICS countries and Singapore, and least in Japan.

Figure 23: AI assurance mechanisms
I would be more willing to trust AI [specific application] if...

                                                             %Disagree   %Neutral   %Agree
The accuracy and reliability of the system was monitored          7         13         80
The organisation using the system had an AI ethics
  code of conduct                                                10         17         73
The system was reviewed by an AI ethics board                     9         18         73
It adhered to standards for explainable and transparent AI        9         18         73
It had an AI ethics certification                                12         20         68
Assurances composite                                              8         17         75

There is strong
global endorsement of the Trustworthy AI management and governance principles: each principle is important for trust globally

A proliferation of reports and guidance documents on the development and deployment of trustworthy, ethical AI has been produced, with considerable consensus emerging on these principles.28 One goal of this survey was to determine the extent to which these principles are important for people to trust in AI across the globe. To answer this question, we asked about the importance of 16 practices reflecting the principles for trustworthy AI shown in Table 1. These principles primarily reflect the Principles for Trustworthy AI adopted by the European Union.29

Table 1: Principles and Practices for Trustworthy AI

Technical performance, accuracy and robustness: The performance and accuracy of AI system output is assessed before and regularly during deployment to ensure it operates as intended. The robustness of output is tested in a range of situations, and only data of appropriate quality is used to develop AI.

Transparency and explainability: The purpose of the AI system, how it functions and arrives at its solutions, and how data is used and managed are transparently explained and reasonably understandable to a variety of stakeholders. Developers keep an audit trail of the methods and datasets used to develop AI.

Data privacy, security and governance: Safety and privacy measures are designed into the AI system. Data used for AI is kept secure, used only for the specific purpose to which it is agreed, and is not shared with other apps or third parties without permission. Robust security measures are in place to identify and prevent adversarial attacks.

Fairness, non-discrimination and diversity: The outcomes of AI systems are assessed regularly to ensure they are fair, free of unfair bias, and designed to be inclusive of a diversity of users. AI is developed with the participation and input of a diverse range of people.

Human agency and oversight: There is appropriate human oversight and control of AI systems and their impact on stakeholders by people with the required expertise and resources to do so. AI systems are regularly reviewed to ensure they are operating in a trustworthy and ethical manner.

Accountability and contestability: There is clear accountability and responsibility if something goes wrong with an AI system. Any impacted user or stakeholder is able to challenge the outcomes of an AI system via a fair and accessible human review process.

AI literacy: People are supported in understanding AI systems, including when it is appropriate to use them, and the ethical considerations of their use.

Risk and impact mitigation: The risks, unintended consequences and potential for harm from an AI system are fully assessed and mitigated prior to and during its deployment.

These principles were endorsed globally, with almost all people (96-99%) across