
APEC Policy Support Unit
POLICY BRIEF No. 52, November 2022

Artificial Intelligence in Economic Policymaking
By Andre Wirjo, Sylwyn Calizo Jr., Glacer Nio Vasquez, and Emmanuel A. San Andres

“It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. There would be no question of the machines dying, and they would be able to converse with each other to sharpen their wits. At some stage therefore, we should have to expect the machines to take control.” Alan Turing, 1951

The above was not meant to be scary at all. Indeed, it was from Alan Turing's 1951 lecture entitled “Intelligent Machinery, A Heretical Theory”, where he set out a qualitative proof that it is possible for machines to think like humans. The words, which ended the lecture, were uttered not as a warning to humankind but as a logical and necessary consequence of the proof. Machines surpassing humans and taking over is not the result of some sinister plan but the logical outcome of an ever-improving and ultra-efficient artificial intelligence (AI). That inevitable progression is what should concern us.

KEY MESSAGES

Artificial intelligence (AI) refers to systems and models that can perform tasks requiring human intelligence. What distinguishes AI is its capacity for autonomous learning. It could take in the data fed to it and teach itself to, for example, solve mathematical conjectures or to understand native human speech.

AI is a powerful tool for policymaking and policy implementation, allowing for efficiency enhancements, improvements in quality of public services, and time savings on administrative tasks. AI has applications across the various stages of the policy cycle, from agenda setting to policy formulation, decision making, implementation, and evaluation.

While AI can be immensely powerful in data analysis and logic, it fares less well on policy-relevant concepts such as fairness, justice and equity, which are inherently human. The ability of AI to make sense of human reality, including understanding causality and cultural nuances, remains inadequate. Who develops the AI and how it is developed also pose risks because human factors such as biases, prejudices or experience can influence AI algorithms and models and, ultimately, the results generated. Furthermore, data, which serve as the lifeblood fuelling AI solutions, can be vulnerable to infrastructure limitations, structural biases and ethical concerns.

AI is already being deployed in policymaking to accomplish specific tasks or analyse large volumes of data. As the technology improves, adoption of AI will increase, and even accelerate. As such, it is imperative to promote its responsible use and to foster the supportive conditions to ensure that it remains a tool for improving human and social welfare. These include: (1) establishing AI governance frameworks, (2) enhancing digital ecosystems, (3) building trust on AI adoption and use, (4) promoting partnerships and collaborations, and (5) leveraging regional cooperation.

AI is the development of computer systems and models that can perform tasks normally requiring human intelligence, such as understanding communication, perceiving a situation, or making a decision. Whether they are deep learning or neural networks, or done using binary or quantum computing, ultimately AI is a tool created to augment human capabilities and improve social welfare, just like the pulley, the steam engine or the computer. However, unlike the usual machines that need human intervention to operate or improve, AI holds the capacity for autonomous improvement and learning. While the most advanced supercomputer needs a human programmer to do anything from simple sums to climate change models, AI can teach itself to solve mathematical conjectures or to understand native human speech, with all its nuances and cultural specifics, well enough to win at Jeopardy.1,2 With time and increases in computing power, one could foresee AI teaching itself to make policy decisions too, which is exactly what Turing, acknowledged as the father of computing and AI, predicted seven decades ago. But Turing's prediction does not need to happen. This policy brief explores how human policymakers can still get ahead of AI and ensure that it remains a tool for the greater good. Section 1 shows how AI is already being used in policymaking and points to its potential for beneficial use in the future. Section 2 follows with a discussion of the limitations and risks of using AI in policymaking, and Section 3 concludes with some policy options and opportunities for regional cooperation to ensure that the AI-enabled future remains human-centric.

1. A Powerful Tool for Good

AI offers many benefits to policymaking and policy implementation through efficiency enhancements, public service quality improvements as well as time savings on administrative tasks.3 AI can be used as a tool to enable policymakers to formulate more effective policies, make better decisions, and improve communication and engagement with stakeholders.4 At each stage of the policy cycle, from agenda setting to policy formulation, decision making, implementation, and evaluation (Figure 1), AI could potentially assist policymakers in generating high-value inputs and creating more meaningful impacts for society.5

Figure 1: The policy cycle (identify issues and problems → develop policy options → choose preferred policy → execute and administer policy → monitor and assess impacts). Source: Adapted from M. Howlett and S. Giest, “Policy Cycle,” International Encyclopedia of the Social & Behavioral Sciences, 2nd edn (Elsevier, 2015), https://doi.org/10.1016/B978-0-08-097086-8.75031-8

1 A. Davies et al., “Advancing Mathematics by Guiding Human Intuition with AI,” Nature 600 (2021): 70–74, https://doi.org/10.1038/s41586-021-04086-x
2 E. Guizzo, “IBM's Watson Jeopardy Computer Shuts Down Humans in Final Game: Silicon Prevails in Men vs. Machine Challenge,” 17 February 2011, https://spectrum.ieee.org/ibm-watson-jeopardy-computer-shuts-down-humans
3 Organisation for Economic Co-operation and Development (OECD), Artificial Intelligence in Society (Paris: OECD Publishing, 2019), https://doi.org/10.1787/eedfee77-en
4 J. Berryhill et al., “Hello, World: Artificial Intelligence and Its Use in the Public Sector,” working paper, OECD Publishing, Paris, 2019, https://doi.org/10.1787/726fd39d-en
5 J. Patel et al., “AI Brings Science to the Art of Policymaking,” BCG, 5 April 2021, https:/
6 State Government of Victoria, “VCDI Case Studies: Early Warning of Public Health Risks,” reviewed 13 February 2020, https://www.vic.gov.au/victorian-centre-data-insights-strategy/vcdi-case-studies#embedding-modern-ways-of-working-at-det
7 R. Haneef et al., “Use of Artificial Intelligence for Public Health Surveillance: A Case Study to Develop a Machine Learning-Algorithm to Estimate the Incidence of Diabetes Mellitus in France,” Archives of Public Health 79 (2021): 168, https://doi.org/10.1186/s13690-021-00687-0

1.1. Agenda setting

At the first stage of the policy cycle, AI could support policymakers in determining the most critical challenges affecting their constituents, through analyses of large datasets and crowdsourced data. The patterns revealed by such AI analyses could guide and inform policymakers in setting agenda priorities. In Australia, for example, the Department of Health and Human Services of the Victoria State Government used advanced text analysis of anonymised historical triage data to detect unusual patterns of illnesses and identify public health risks.6 The surveillance effort, aided by machine learning, helped public health officials in providing early warnings of potentially harmful illnesses to the public. Likewise, AI algorithms have been developed by public health experts in France to estimate the incidence of diabetes mellitus,7 providing essential data for public health surveillance. Machine learning algorithms have also been used to analyse crowdsourced datasets. Natural language processing techniques have been utilised in Belgium to help civil servants process high volumes of data from stakeholder engagement platforms.8 The AI technology classifies and analyses data from real-time dashboards to detect trends and uncover insights from these patterns that can be disaggregated across demographic groups and geographic locations. Similarly, in Bulgaria, territorial distributions of signals and data are being analysed to detect behavioural trends and identify issues of concern in urban areas.9
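The Belgian and Bulgarian examples rest on fairly standard text-mining building blocks. The sketch below shows one minimal way such a pipeline could look in Python: vectorise free-text citizen submissions and cluster them into themes that analysts can then label and disaggregate. The file name, column names and number of themes are illustrative assumptions, not details from the cited projects.

```python
# Sketch: group free-text citizen feedback into themes to inform agenda setting.
# Assumes a CSV with a free-text column named "comment"; all names are illustrative.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

df = pd.read_csv("citizen_feedback.csv")            # hypothetical input file
texts = df["comment"].fillna("").tolist()

# Represent each submission as a TF-IDF vector (unigrams and bigrams).
vectorizer = TfidfVectorizer(max_features=5000, ngram_range=(1, 2), stop_words="english")
X = vectorizer.fit_transform(texts)

# Group submissions into a small number of themes.
k = 8                                               # illustrative number of themes
model = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
df["theme"] = model.labels_

# Surface the most characteristic terms per theme for a human analyst to label.
terms = vectorizer.get_feature_names_out()
for i, centroid in enumerate(model.cluster_centers_):
    top = [terms[j] for j in centroid.argsort()[-8:][::-1]]
    print(f"Theme {i}: {', '.join(top)} ({(df['theme'] == i).sum()} submissions)")
```

If the feedback platform also records location or demographic fields, the theme counts can be broken down along those dimensions, which is the kind of disaggregation described above.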

1.2. Formulation

AI could also directly contribute to policy formulation by providing evidence-based insights. At this stage of the policy cycle, the predictive power of AI is a useful and powerful tool that can help in estimating the likely impacts of economic policies, projecting the costs and benefits of policy options and properly identifying the target population. For example, AI and two-level deep reinforcement learning have been used to assess the impact of tax policy designs.10 That AI framework, named the AI Economist, has been found to be effective and viable in formulating economic policies. Recent developments have also shown that properly trained machine learning models can rapidly and accurately forecast the impacts of green spending, which aids in crafting more effective fiscal policies.11 AI could also contribute to formulating trade policies. For example, the United Nations Conference on Trade and Development (UNCTAD) has developed artificial neural networks that predict the impact of trade policies on global trade flows.12 The nowcasting predictions of global merchandise export values and volumes and of global services exports have been shown to be superior to traditional econometric forecasting models. The ability to accommodate several input features and various time frequencies, seen in this model, is another advantage conferred by artificial neural networks.

8 Observatory of Public Sector Information (OPSI), “Unlocking the Potential of Crowdsourcing for Public Decision-making with Artificial Intelligence,” 12 April 2018, https://oecd-opsi.org/innovations/unlocking-the-potential-of-crowdsourcing-for-public-decision-making-with-artificial-intelligence/
9 Policy Cloud, “Urban Policy Making through Analysis of Crowdsourced Data,” accessed 20 October 2022, https://policycloud.eu/pilots/urban-policy-making-through-analysis-crowdsourced-data
10 S. Zheng et al., “The AI Economist: Taxation Policy Design via Two-level Deep Multiagent Reinforcement Learning,” Science Advances 8, no. 18 (4 May 2022), https://doi.org/10.1126/sciadv.abk2607
11 R. Maia, H. Sharma, and D. Hopp, “Using Machine Learning to Make Government Spending Greener,” 8 October 2021, https://greenfiscalpolicy.org/blog/using-machine-learning-to-make-government-spending-greener/#_ftn9
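As a rough illustration of the nowcasting idea, the sketch below trains a small LSTM on rolling windows of higher-frequency indicators to predict a trade series. It is a toy set-up on synthetic data; the window length, indicators and architecture are assumptions and do not reproduce the UNCTAD model.

```python
# Sketch: nowcast a trade series from windows of higher-frequency indicators with an LSTM.
import numpy as np
import tensorflow as tf

def make_windows(features, target, lookback=12):
    """Turn a (T, n_features) matrix into (samples, lookback, n_features) windows."""
    X, y = [], []
    for t in range(lookback, len(target)):
        X.append(features[t - lookback:t])
        y.append(target[t])
    return np.array(X), np.array(y)

# Placeholder data: e.g., monthly indicators (port calls, PMI, prices), already scaled.
T, n_features = 240, 6
rng = np.random.default_rng(0)
features = rng.normal(size=(T, n_features))
target = rng.normal(size=T)                          # placeholder export-growth series

X, y = make_windows(features, target)

model = tf.keras.Sequential([
    tf.keras.Input(shape=X.shape[1:]),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=20, batch_size=16, verbose=0)

# Nowcast the latest period from the most recent window of indicators.
latest_window = features[-12:][np.newaxis, ...]
print("nowcast:", float(model.predict(latest_window, verbose=0)[0, 0]))
```

The appeal noted above is that such a model can mix several input features at different frequencies, whereas classical econometric nowcasts usually require a much more rigid specification.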

AI can offer advanced data analysis by processing both traditional and innovative data sources and helping to ensure inclusive policy interventions are better targeted. For example, the Asian Development Bank (ADB) has used computer vision techniques on satellite imagery to predict and map poverty at a granular level in the Philippines and Thailand, thus providing policymakers with timely poverty data even between household survey cycles.13 The use of AI tools has also been explored in Quebec, Canada for assessing the well-being of diverse communities, allowing a more targeted approach in formulating policy interventions.14

1.3. Decision making

The third stage of the policy cycle involves the decision-making process in adopting policy interventions. At the policy formulation stage, AI could serve as a simulator to test and forecast the potential impacts of economic policies; in contrast, at the decision-making stage, AI could be a tool to improve the quality and speed of the daily decision-making process in legislative bodies. For example, AI can help provide solutions to issues related to congressional and committee scheduling and in creating optimal models to improve the planning of hearings and the scheduling of votes, which allows the decision-making process in the legislature to be more efficient.15 Members of legislative bodies can also benefit from the use of natural language processing technologies to analyse bills, amendments and laws, and of AI chatbots to ask questions about the status of bills, resolutions and the oversight procedure, which helps policymakers to speed up the decision-making process.16 Understanding the application of AI at this step is especially important given that an estimated 60 percent of investments in government AI will have a direct impact on real-time operational decisions by 2024.17 AI can also directly provide inputs and suggestions to policymakers in real time. For example, in China, machine learning algorithms built by the Chinese Academy of Sciences have been used in providing inputs and offering recommendations on foreign policy to policymakers.18 An advantage of these AI-based judgments is that decisions can be based on timely and accurate data.

12 D. Hopp, “Economic Nowcasting with Long Short-term Memory Artificial Neural Networks (LSTM),” United Nations Conference on Trade and Development (UNCTAD) Research Paper 62, 2021, https://unctad.org/webflyer/economic-nowcasting-long-short-term-memory-artificial-neural-networks-lstm
13 M. Hofer et al., “Applying Artificial Intelligence on Satellite Imagery to Compile Granular Poverty Statistics,” Asian Development Bank (ADB) Economics Working Paper 629, 2020, https://www.adb.org/publications/artificial-intelligence-satellite-imagery-poverty-statistics
14 E. Rowe et al., “Harnessing Artificial Intelligence to Measure the Well-Being of Quebec's Diverse Regions,” Max Bell School of Public Policy, 28 July 2021, https://www.mcgill.ca/maxbellschool/article/articles-policy-lab-2021/harnessing-artificial-intelligence-measure-well-being-quebecs-diverse-regions
15 E. Graham, “AI Could Help Congress Schedule and Find Unexpected Consensus, Expert Says,” 28 July 2022, https:/ …congress-schedule-and-find-unexpected-consensus-expert-says/375105/
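A minimal sketch of the bill-analysis idea mentioned in Section 1.3 above: a pretrained summarisation model condenses a bill's text for quick triage. The model name and the sample text are purely illustrative; a legislature would need models adapted to its own language and legal drafting conventions.

```python
# Sketch: summarise the text of a bill so legislators can triage it quickly.
from transformers import pipeline

summariser = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

bill_text = """Section 1. Short title. This Act may be cited as the Example Data
Governance Act. Section 2. Purpose. To establish requirements for the responsible
use of automated decision systems by public agencies, including impact assessments,
public registers of deployed systems, and annual reporting to the legislature."""

summary = summariser(bill_text, max_length=60, min_length=20, do_sample=False)
print(summary[0]["summary_text"])
```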

1.4. Implementation

Policymakers can benefit from advanced AI systems and hardware in executing policies. Through automation, quick data processing and real-time analysis, the use of AI can lead to improvements in the quality, speed and efficiency of the delivery and implementation of policies. In terms of implementing transport-related policies, the US city of Pittsburgh, Pennsylvania utilised AI technology in its traffic systems to reduce travel time.19 The technology detects cars through its radar devices, monitors traffic flows, creates AI models based on the gathered data and generates a real-time signal timing plan. This then enables traffic lights to adapt to certain traffic conditions instead of using pre-programmed traffic-light cycles. The implementation of the fully automated and adaptive AI system was successful in reducing idling by more than 40 percent, braking by about 30 percent and travel time by about 25 percent.
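The deployed system's scheduling logic is considerably more sophisticated than anything shown here; the sketch below only illustrates the general adaptive idea, allocating green time in proportion to the queues detected in the current cycle rather than following a fixed plan. All numbers and names are illustrative.

```python
# Sketch: allocate green time across approaches in proportion to detected queues,
# recomputed every cycle. A deliberately simplified stand-in for adaptive signal timing.
from dataclasses import dataclass

@dataclass
class Approach:
    name: str
    queue_length: int      # vehicles detected by radar/camera; assumed input

def plan_cycle(approaches, cycle_seconds=90, min_green=10):
    """Return seconds of green per approach for the next cycle."""
    total_queue = sum(a.queue_length for a in approaches) or 1
    flexible = cycle_seconds - min_green * len(approaches)
    plan = {}
    for a in approaches:
        share = a.queue_length / total_queue
        plan[a.name] = round(min_green + share * flexible)
    return plan

detected = [Approach("northbound", 14), Approach("southbound", 6),
            Approach("eastbound", 22), Approach("westbound", 3)]
print(plan_cycle(detected))   # e.g., {'northbound': 26, 'southbound': 17, ...}
```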

In the US city of New Orleans, Louisiana, AI was employed to improve the implementation of its medical services systems.20 Advanced analytics and open source software were used to reduce the response times of emergency medical services and to ensure equitable access to ambulance services across communities. To improve the maintenance and operation of roads and highways in China, machine learning and data-driven analysis were implemented to enhance defect detection capability, create a road defects management system and classify the defects found.21 In terms of policies related to fraud detection, the Federal Service for Veterinary and Phytosanitary Supervision of Russia has utilised AI technology to reveal counterfeit and falsified food products.22 To reduce the proportion of counterfeits and strengthen the traceability system, the agency developed and implemented an AI-based technology to detect violations across the various stages of the production and movement of food products, by analysing veterinary certificates and processing a large volume of datasets to reveal suspicious patterns of falsifications. This innovation has protected consumers from purchasing potentially dangerous and low-quality products.

16 Inter-Parliamentary Union, “Artificial Intelligence: Innovation in Parliaments,” 14 February 2020, https://www.ipu.org/innovation-tracker/story/artificial-intelligence-innovation-in-parliaments
17 Deloitte, “Deloitte AI Institute: The Government and Public Services AI Dossier,” accessed 20 October 2022, https:/
18 P. Amaresh, “Artificial Intelligence: A New Driving Horse in International Relations and Diplomacy,” Diplomatist, 13 May 2020, https:/
19 J. Snow, “This AI Traffic System in Pittsburgh Has Reduced Travel Time by 25%,” Smart Cities Dive, 20 July 2017, https:/

1.5. Evaluation

Policymakers must ensure that the policies they implement are indeed efficient and effective in achieving their desired objectives. AI could advance the evaluation stage of the policy cycle by providing faster and more accurate data that can assess the impact of policies. For example, the World Bank developed machine learning algorithms to quantify and evaluate the impact of trade agreements on trade flows.23 Some advantages of using AI in international trade research include improved data selection accuracy and the lack of a need for ad hoc assumptions in the aggregation process of individual provisions, providing more accurate and evidence-based analysis of impacts. AI can also be used to evaluate climate-related policies. Machine learning algorithms have been utilised to assess the effectiveness of carbon pricing in the United Kingdom.24 An advantage of using an AI-based model is that it can predict outcomes under the observed treatment (with carbon tax) as well as outcomes under the unobserved counterfactual intervention (no carbon tax), resulting in a more accurate and unbiased estimate of impacts. Machine learning and reinforcement learning models can also be used to assess the impact of education-related policies,25 interventions that support small and medium-sized enterprises26 and pandemic policies.27 In these cases, the use of AI improves the estimation of causal effects, enhances the credibility of policy analysis and raises the accuracy of predictions.

20 M. Jachimowicz, M. Headley, and S. Bergmann, “Case Study: New Orleans Improves Public Safety by Integrating Administrative Data and Emergency Medical Services Expertise,” Results for America, 2021, https://results4america.org/wp-content/uploads/2018/01/Final-New-Orleans-Case-Study.pdf
21 OPSI, “SenseTraffic ICV Maint for Improving Highway and Road Maintenance Operations,” 25 November 2021, https://oecd-opsi.org/innovations/sensetraffic-icv-maint/
22 OPSI, “Artificial Intelligence Reveals Counterfeit and Falsified Products,” 15 January 2021, https://oecd-opsi.org/innovations/artificial-intelligence-reveals-counterfeit-and-falsified-products/
23 H. Breinlich et al., “Machine Learning in International Trade Research: Evaluating the Impact of Trade Agreements,” World Bank Policy Research Paper 9629, 2021, https://openknowledge.worldbank.org/handle/10986/35451
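The counterfactual logic described above can be sketched very simply: fit a model on pre-policy data, predict what would have happened after the policy date without the intervention, and compare that with what was observed. The file, columns, date and model below are illustrative assumptions, not the cited study's specification.

```python
# Sketch: machine-learning counterfactual for policy evaluation.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

df = pd.read_csv("emissions_panel.csv")              # hypothetical panel: date, covariates, emissions
df["date"] = pd.to_datetime(df["date"])
policy_start = pd.Timestamp("2013-04-01")            # illustrative policy date

pre = df[df["date"] < policy_start]
post = df[df["date"] >= policy_start]
covariates = ["gas_price", "coal_price", "electricity_demand", "temperature"]

# Learn the pre-policy relationship between the outcome and its drivers.
model = GradientBoostingRegressor(random_state=0)
model.fit(pre[covariates], pre["emissions"])

# Counterfactual: predicted emissions after the policy date, had nothing changed.
post = post.assign(counterfactual=model.predict(post[covariates]))
effect = (post["emissions"] - post["counterfactual"]).mean()
print(f"Estimated average effect on emissions: {effect:.2f}")
```

The gap between observed and counterfactual outcomes is the estimated policy effect; in practice the validity of such estimates depends on how well the pre-policy model transfers to the post-policy period.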

2. Understanding the Limitations and Risks

As discussed, AI can and has been used to benefit policymaking in various ways. While AI can have immense power in data analysis and logic, policy-relevant concepts such as fairness, justice and equity are inherently human. Hence, the adoption of AI in policymaking faces its own set of challenges since policies have widespread implications that can affect many human lives. It is also worth emphasising that incorporating AI into policymaking requires more thought and consideration compared to commercial applications: while consumers can generally opt out of commercial AI applications, it is harder for stakeholders to avoid the impacts of policy. One way of framing the challenges related to AI is to group them into three categories: situation, program set-up, and data (Figure 2). Policymakers need to properly evaluate all of these areas when considering the adaptation of AI for policymaking. Further, the interlinked nature of these three categories means that challenges marring any one of them will determine whether an AI-enabled policymaking process can achieve its intended results.

Figure 2: Factors influencing the adoption of artificial intelligence in policymaking (Is AI appropriate to use in this particular situation? Who develops the AI? What data is provided to the AI?). Source: Authors.

2.1. Situation: Is AI appropriate to use in this particular situation?

AI has its own set of limitations, just like any other tool. Acknowledging and understanding these limitations is an important step for policymakers since that will define whether AI can help achieve a particular result and to what extent it can help achieve the result. For example, AI may improve the efficiency (speed) and objectivity of judicial decisions (strictly based on written law), but AI cannot yet replace a judge's compassion or sense of justice since these are inherently subjective.28 A more mundane example, and arguably one of AI's greatest limitations, is its ability to make sense of human reality. This refers to understanding causality and cultural nuances, that is, unwritten rules that an average person would be able to comprehend and process. AI could find these challenging because each rule may have a multitude of exceptions or connections with other nuances (not all of them rational), while statistical relationships do not provide discrete, semantically grounded representations to replicate a person's mental interpretation of objects. Interestingly, even the Cyc database, a project begun in 1984 that aims to give AI common sense by gathering data on how the world works and had coded close to 25 million nuances as of 2017, was not enough to impart AI with the common sense of an average person.29

24 J. Abrell, M. Kosch, and S. Rausch, “How Effective Is Carbon Pricing? A Machine Learning Approach to Policy Evaluation,” Journal of Environmental Economics and Management 112 (2022), https://doi.org/10.1016/j.jeem.2021.102589
25 M. Ballestar et al., “A Novel Machine Learning Approach for Evaluation of Public Policies: An Application in Relation to the Performance of University Researchers,” Technological Forecasting and Social Change 149 (2019), https://doi.org/10.1016/j.techfore.2019.119756
26 G. Perboli et al., “Using Machine Learning to Assess Public Policies: A Real Case Study for Supporting SMEs Development in Italy” (IEEE Technology & Engineering Management Conference-Europe (TEMSCON-EUR), 2021), https://doi.org/10.1109/TEMSCON-EUR52034.2021.9488581
27 S. Song et al., “Pandemic Policy Assessment by Artificial Intelligence,” Scientific Reports 12, no. 13843 (2022), https://doi.org/10.1038/s41598-022-17892-8
28 J. Kelly, “Commentary: Imperfect AI-driven Justice May Be Better than None at All,” CNA, 20 September 2022, https:/
29 R. Toews, “What Artificial Intelligence Still Can't Do,” Forbes, 1 June 2021, https:/

These two examples illustrate an important point that policymakers should understand when determining the

77、 appropriateness of adopting AI in the context of policymaking:AI should not completely replace humans,or at least not at its current level of capability(see Box 1).After all,policymaking involves more than just economic logic and efficiency;it touches on governance and equity.In this context,it mak

78、es sense that policies that govern people should always be touched on by people.In fact,experience has already shown that leaving policymaking to unsupervised AI can lead to unintended or even harmful results.An example is the childcare benefits controversy in the Netherlands,30 which started when t

79、ax authorities adopted a self-learning algorithm to help identify likely fraudsters.While aimed at ensuring that benefits reach the intended beneficiaries,the self-learning algorithm in this case was essentially a black box,in that it was opaque on how the system improved itself.Issues such as insti

80、tutional biases,lack of transparency and inadequate checks and balances led the algorithm to misidentify legitimate beneficiaries as potential fraudsters,leading to costly litigation for people who can barely afford them.As an illustration of the algorithms shortcomings,Amnesty International reporte

81、d that having Turkish or Moroccan ethnicity or a non-Western appearance resulted in higher risk scores.31 Minor administrative errors,such as a missing signature or late payments,could also lead to being tagged as a potential fraudster.Beneficiaries who were misidentified as fraudsters ultimately ha

d their benefits suspended and had to repay previously enjoyed benefits immediately as a lump sum. In at least one case, the tax bill reached more than EUR 100,000, more than three times the average annual salary of a young adult aged 25–29 working in the Netherlands.32 The misidentification caused tens

83、 of thousands of families to go into debt for years,pushing the intended policy beneficiaries into poverty,displacing thousands of children,and even causing deaths.This case emphasises the importance of recognising and understanding the limitations of AI.Decision makers such as ministers or leaders

30 M. Heikkila, “Dutch Scandal Serves as a Warning for Europe over Risks of Using Algorithms,” 29 March 2022, https://www.politico.eu/article/dutch-scandal-serves-as-a-warning-for-europe-over-risks-of-using-algorithms/
31 Amnesty International, “Xenophobic Machines: Discrimination through Unregulated Use of Algorithms in the Dutch Childcare Benefits Scandal” (London: Amnesty International, 2021), https://www.amnesty.org/en/documents/eur35/4686/2021/en/

may have practical policy implementation knowledge but they are not necessarily data scientists or AI experts. This means that decisions have to be considered

86、 holistically using a plethora of other inputs that complement AI-augmented insights.Otherwise,a decision maker may unwittingly misinterpret or more readily accept an AI-augmented insight due to epistemic trust;that is,the trust given by non-experts to acknowledged experts in the field.A lack of,or

limited, technical 32 Statista Research Department, “Average Annual Salary in the Netherlands 2021, by Age,” 18 July 2022, https:/
Box 1: Artificial narrow intelligence (ANI) and artificial general intelligence (AGI)
What many people think of as AI is often ANI, which refers to AI that can outperform a human on a

88、narrowly defined and structured task.Applications based on ANI do not think for themselves but simulate human behaviour based on a set of rules,parameters and contexts that they have been trained for.One example is a chatbot,which accomplishes a customer representatives basic and repetitive tasks an

89、d learns from repeated interactions,but cannot do other unrelated activities such as writing a news article or composing music.Thus,while an ANI enhances productivity and efficiency,it is constrained on what it can do.In contrast,AGI can potentially reason,think and learn just like a human.It could

90、even theoretically have human consciousness or self-awareness,if it is possible to define and code those attributes.Although some ongoing AI research has the goal of creating one,a fully capable AGI has arguably not yet been achieved because of the complexity of the human brain and the challenges of

91、 modelling it accurately(e.g.,replicating the formation of neural interconnections).At the minimum,AGI should hypothetically have the attributes usually associated with human intelligence such as common sense and abstraction,and different tests to confirm the achievement of AGI have been considered,

92、such as the Turing test.Current limitations notwithstanding,significant developments such as natural language processing and computer vision,complemented by continuous improvements in computing power,are starting to make AGI less like science fiction and more a real possibility.Source:Z.Larkin,“Gene

93、ral AI v Narrow AI,”Levity,5 October 2022,https:/levity.ai/blog/general-ai-vs-narrow-ai 7 understanding of how AI-augmented insights are generated could also lead to decisions that are based on incomplete or misinterpreted information.This raises epistemo-ethical constraints,which Babushkina and Vot

94、sis eloquently phrased as how should AI results be interpreted during decision-making when such a decision entails risk of harm and significant moral cost?33 Clearly,decision makers need to not just be careful about AI adoption but also be informed since their decisions have wide impacts.2.2.Program

95、 set-up:Who develops the AI?Another factor influencing AI adoption into policymaking is the program set-up.This involves not only the team responsible for developing the AI but also how it was developed.The development of AI solutions often involves a team composed of data scientists(e.g.,programmer

96、s,engineers and statisticians)and subject-matter experts(e.g.,economists,public health specialists or other specialists).Challenges can occur at every phase of the AI life cycle,which includes(1)design,data and modelling;(2)verification and validation;(3)deployment;and(4)operation and monitoring(Fig

97、ure 3).Who develops the AI solution matters because human factors such as biases,prejudices or experience can influence AI algorithms and models and,ultimately,the results,at each stage of the AI life cycle.For example,cognitive biases can be introduced,whether intentional or not,during the crucial

98、phase 1 when policymakers set objectives,data scientists process data,and specialists interpret models.These biases become even more relevant when one considers that the choice of AI model can have different outcomes for various demographic groups even if the same underlying data are used.34 Once fu

99、nctional,the AI model undergoes phase 2 of verification and validation.This involves executing the model and assessing whether its performance is aligned with policy objectives.Successfully conducting this step requires AI developers who are aware of the multidimensional and multisectoral nuances of

100、 the policy that they are supporting.After all,policymaking is not just a technical challenge but one that also involves considerations of law,social science,public health 33 D.Babushkina and A.Votsis,“Epistemo-ethical Constraints on AI-Human Decision Making for Diagnostic Purposes,”Ethics and Infor

mation Technology 24, no. 22 (2022), https://doi.org/10.1007/s10676-022-09629-y
34 J. Z. Forde et al., “Model Selection's Disparate Impact in Real-World Deep Learning Applications,” arXiv:2104.00606v2 [cs.LG], 7 September 2021, https://arxiv.org/pdf/2104.00606.pdf

and ethics, among others. Amazon, for instance, found during the verification and validation phase that their AI-driven recruitment engine was not rating candidates for certain posts in a gender-neutral way.35 Upon investigation, the models were found to have been trained to vet candidates by observing patterns in curricula vitae (CVs) submitted over a 10-year period, which turned out to be mostly from men, thus biasing the results in favour of men.
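A basic outcome audit of the kind that would surface such a problem can be run with a few lines of analysis: compare selection rates across groups and flag large gaps before deployment. The data file, column names, score cut-off and the 0.8 threshold (the common "four-fifths rule") are illustrative assumptions.

```python
# Sketch: audit a model's outcomes by group before deployment.
import pandas as pd

scored = pd.read_csv("scored_candidates.csv")        # hypothetical: gender, model_score
scored["shortlisted"] = scored["model_score"] >= 0.7  # illustrative cut-off

rates = scored.groupby("gender")["shortlisted"].mean()
print(rates)

ratio = rates.min() / rates.max()
if ratio < 0.8:
    print(f"Warning: selection-rate ratio {ratio:.2f} falls below 0.8; review the "
          "training data and any features correlated with gender before deployment.")
```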

Figure 3: The AI life cycle (strategically process data in preparation for model building and interpretation → execute and assess performance against objectives → quality checking to adhere with technical and managerial considerations → operate and continuously identify issues for model recalibration or termination). Source: Adapted from Organisation for Economic Co-operation and Development (OECD), “Artificial Intelligence in Society” (Paris: OECD Publishing, 2019), https://doi.org/10.1787/eedfee77-en

Moreover, while an effective multidimensional and multisectoral AI development team is necessary for the application of AI to policymaking, teams with diverse skillsets can be susceptible to communication challenges, as members with multiple backgrounds can make conflict or miscommunication more likely. For instance, mismatched motivations or insufficient understanding of each other's point of view can make it difficult for teams to communicate using the same language or to relay AI-augmented insights to decision makers.36

35 J. Dastin, “Amazon Scraps Secret AI Recruiting Tool that Showed Bias against Women,” Reuters, 11 October 2018, https:/
36 D. Piorkowski et al., “How AI Developers Overcome Communication Challenges in a Multidisciplinary Team: A Case Study,” arXiv:2101.06098v1 [cs.CY], 13 January 2021, https://arxiv.org/pdf/2101.06098.pdf

2.3. Data: What data is provided to the AI?

Data serve as the lifeblood fuelling AI solut

108、ions.Data are needed for machine learning and for developing AI-formulated solutions.Data,however,can be vulnerable to at least three challenges:(1)infrastructure limitations;(2)structural biases;and(3)ethical concerns.Infrastructure limitations affect AI developers access to quality data,which can

109、affect phase 3 of the AI life cycle(deployment).For example,developers would need to have proper hardware,software,human capital,and training.There should also be proper processes and digital infrastructure in place to generate,collect,store and maintain data.The significant capital investment requi

110、red to do so implies that access to data and therefore use of AI could be monopolised by those with resources and large user bases.For example,the final model to train GPT-3,an AI language generator,has already cost OpenAI an estimated USD 12 million,37 and this is likely an underestimate given that

111、 there are also undetermined development costs and the cost of developing prototypes.As such,only a few firms would have the financial capacity to develop AI solutions.Further,the lack of transparency in proprietary AI also prevents peer review and replication.In fact,only 15 percent of AI studies h

112、ave shared their code,based on the State of AI report 2020.38 This is an important concern because models that work in a controlled environment may not necessarily work the same way in real-world application.Lack of transparency and peer review can also lead to ethical concerns,such as when data gat

hering methodologies violate privacy. For example, Clearview AI, which has been used in law enforcement, was found to have violated privacy laws in both Australia39 and Canada40 by gathering images from social media sites without users' consent. Furthermore, historical data can reinforce structural biases (

114、e.g.,racism and sexism)if not utilised conscientiously and corrected for bias.The case of Amazon mentioned earlier is one example:the data 37 W.D.Heaven,“AI Is Wrestling with a Replication Crisis,”12 November 2020,https:/ OpenAI,“State of AI Report 2020”,2020,https:/ 39 N.Lomas,“Clearview AI Told It

Broke Australia's Privacy Law, Ordered to Delete Data,” TechCrunch, 3 November 2021, https:/ 40 Z. Whittaker, “Clearview AI Ruled Illegal by Canadian Privacy Authorities,” TechCrunch, 4 February 2021, from the CVs reflect the historical inequity of women's access to digital skills, so the ensuing AI algorithm pro

116、vided results that were biased in favour of male candidates.Likewise,there have been reports in the US that some AI-enabled solutions used to assist criminal courts in determining the appropriate bail,sentences or judgment tended to reinforce racial prejudices in law enforcement data.41 In particula

117、r,Black defendants were being flagged as future criminals almost twice as much as White defendants.In another example,a 2019 study,also in the US,shows that a majority of the 189 facial recognition algorithms examined(some of which were being used by law enforcement authorities)had higher rates of m

118、isidentification for non-White faces.42 These examples highlight that data is a product of its methodology and circumstances.43 Data may incorporate biases due to factors such as technical methodology,sampling and non-sampling errors,or budget constraints leading to gaps in data coverage.AI solution

119、s based on data need to consider these limitations and,where necessary,correct for any methodological or structural bias.3.Developing the Policies and Frameworks AI is starting to be steadily applied to policymaking work,aiding policymakers in accomplishing specific tasks or analysing large volumes

120、of data.As the nascent technologies improve,the role of AI in policymaking would likely gain wider recognition and adoption.However,as with many technologies,AI is not a silver bullet.This is particularly so in the context of policymaking where a decision could have wide-ranging implications on soci

121、ety and the economy.It is thus imperative that there is a supportive environment to promote its responsible use and ensure that AI remains a tool for improving human and social welfare.Some policy approaches that policymakers can consider are discussed below.https:/ V.Polonski,“AI Is Convicting Crim

122、inals and Determining Jail Time,but Is It Fair?”World Economic Forum,19 November 2018,https:/www.weforum.org/agenda/2018/11/algorithms-court-criminals-jail-time-fair/42“Many Facial-Recognition Systems Are Biased,Says U.S.Study,”New York Times,19 December 2019,https:/ 43 S.Buranyi,“Rise of the Racist

123、 Robots How AI Is Learning All Our Worst Impulses,”Guardian,8 August 2017,https:/ 9 3.1.Establish AI governance frameworks As the earlier sections have clearly articulated,AI is a tool that could be employed to achieve different objectives,depending on the motivation of the users.However,there must

124、be guardrails to ensure its trustworthy,safe and responsible use.Economies could develop AI governance frameworks to provide clarity on its use and ensure that regulatory imperatives are met,while at the same time encourage innovation.Ideally,the frameworks should cover the entire life cycle of an A

125、I system(Figure 3).A good starting point would be the global agreement on the ethics of AI adopted by all 193 member economies of the United Nations Educational,Scientific and Cultural Organization(UNESCO).44 The text which aims to guide the formulation of the necessary legal frameworks to ensure th

126、e ethical development of AI sets the first global normative framework for AI while also allowing economies to apply it responsibly at the domestic level(see Box 2).Other references include the Organisation for Economic Co-operation and Development(OECD)Principles on Artificial Intelligence,45 Singap

127、ores Model Artificial Intelligence Governance Framework46 and the European Commissions Ethics Guidelines for Trustworthy AI.47 As operationalising these principles and frameworks could be challenging,economies could consider translating them into practical measures by which users could showcase and

128、deploy their adherence to the principles.Where possible,economies could also provide case studies and examples of how specific users have operationalised the principles in practice.48 Additionally,governments could encourage the private sector to develop their own self-regulatory mechanisms in the f

129、orm of,for example,codes of conduct,voluntary standards and best practices.3.2.Enhance digital ecosystems Good quality data is fundamental to AI adoption and use.Data serve as critical inputs for AI-based 44 United Nations(UN),“193 Countries Adopt First-ever Global Agreement on the Ethics of Artific

130、ial Intelligence,”UN News,25 November 2021,https:/news.un.org/en/story/2021/11/1106612 45 OECD,“OECD AI Principles Overview,”OECD.AI Policy Observatory,accessed 20 October 2022,https:/oecd.ai/en/ai-principles 46 Info-communications Media Development Authority(IMDA)and Personal Data Protection Commis

131、sion(PDPC),Singapore,“Model Artificial Intelligence Governance Framework,”2nd edition(Singapore:IMDA and PDPC,2020),https:/www.pdpc.gov.sg/-/media/Files/PDPC/PDF-Files/Resource-for-Organisation/AI/SGModelAIGovFramework2.pdf 47 European Commission,“Ethics Guidelines for Trustworthy AI,”2019,https:/ec

132、.europa.eu/futurium/en/ai-alliance-consultation.1.html 48 Examples include:World Economic Forum(WEF)and IMDA,“Companion to the Model AI Governance Framework Implementation and Self-Assessment Guide for Organizations,”(Geneva:WEF,2020),https:/www.pdpc.gov.sg/-/media/Files/PDPC/PDF-Files/Resource-for-

133、Organisation/AI/SGIsago.pdf;IMDA and PDPC,“Compendium of Use Cases:Practical Illustrations of the Model AI Governance Framework,”Vol.1 and Vol.2,2020,https:/www.pdpc.gov.sg/-/media/Files/PDPC/PDF-Files/Resource-for-Organisation/AI/SGAIGovUseCases.pdf,and https:/file.go.gov.sg/ai-gov-use-cases-2.pdf.

134、Box 2:UNESCO recommendation on the ethics of AI The recommendation comprises eight main sections,among them scope of application;aims and objectives;values and principles;areas of policy action;and monitoring and evaluation.The recommendation aims to provide a universal framework of values,principle

135、s and actions to guide economies in the formulation of legislation,policies or other instruments regarding AI consistent with international law.It also aims to foster multi-stakeholder,multidisciplinary and pluralistic dialogue and consensus building on ethical issues related to AI systems.The recom

136、mendation lists and expands on the values and principles that should be respected by all actors in the AI system life cycle.Examples of the values that are included are the importance of ensuring diversity and inclusiveness,and respect,protection and promotion of human rights.Examples of the princip

137、les mentioned are safety and security;fairness and non-discrimination;transparency and explainability;and responsibility and accountability.Importantly,the action-oriented policy chapters elaborate on what economies and various stakeholders should put in place to operationalise the stated values and

138、 principles.Policy areas include ethical impact assessment;ethical governance and stewardship;data policy;communication and information;gender;and education and research.The monitoring and evaluation section notes that economies should use a combination of quantitative and qualitative approaches to

credibly and transparently monitor and evaluate policies, programmes and mechanisms related to AI ethics. Source: UNESCO, “Draft Text of the Recommendation on the Ethics of Artificial Intelligence,” 25 June 2021, https://unesdoc.unesco.org/ark:/48223/pf0000377897 analysis. Consequently, an enabling digita

140、l ecosystem must be in place for data to be collected and analysed.Economies would therefore need to tackle the digital divide at different levels.One would be to ensure universal and affordable material access(such as to smartphones and computers)and access to the internet.Doing so would require ec

141、onomies to,among others,provide incentives such as grants and subsidies to support first-time purchases,work with providers to widen their network coverage,and explore innovative approaches to provide access in underserved areas.A related aspect is to boost AI talents in the population by providing

142、courses aimed at imparting data science and AI skills in an inclusive manner.49 When AI skills are concentrated in one demographic,for example,rich,young males,structural biases can creep into AI development and policy application.More generally,economies would need to ensure that the population has

143、 the skills to use digital tools because they serve as the primary touchpoints for relevant data to be collected,analysed and utilised for various objectives including policymaking.Along with quality data and infrastructure,good data management and modelling skills are critical to successful AI adop

144、tion.The importance of data privacy and security notwithstanding,the value of data collected by one party could be optimised when they can be analysed collectively as part of a bigger pool of data for reasons such as representativeness and correlations.Therefore,economies would also need to ensure t

145、hat the data collected could be shared in a safe and secure manner between relevant parties,including among government agencies.At the same time,it should be recognised that training and optimising AI systems incentivise more data collection,and there is a need for open discussions on how this could

146、 be reconciled with principles such as data minimisation and consent.More broadly,digital ecosystems should consider the willingness of people to participate and live in a sensor-enhanced environment since that may limit the potentials of AI adoption.3.3.Build trust on AI adoption and use A primary

147、obstacle to AI adoption and use in policymaking is the level of trust that people have in AI-based decisions.For example,AI is perceived to be less sympathetic to soft factors such as 49 Examples include Retrain Canada(https:/ Singapores TechSkills Accelerator programmes.See IMDA,“TechSkills Acceler

148、ator(TeSA)”,accessed 20 October 2022,compassion or empathy than humans.It is therefore imperative that policymakers ensure that AI use remains human-centric.Policymakers need to carefully deliberate the appropriateness of various approaches of AI-augmented decision making(i.e.,human-out-of-the-loop,

149、human-over-the-loop,and human-in-the-loop),50 and the circumstances under which each approach would be applied.Risk assessments should be conducted and where a decision has a significant impact on people or whenever the harm caused by a wrong decision could be severe there should generally be more s

afeguards and human involvement. Policymakers should also be transparent on various aspects of AI use such as the basis behind AI's decision-making process, the extent of AI involvement and the reversibility of AI decisions. Even in circumstances where human-out-of-the-loop is arguably the preferred approach, economies could implement it gradually. For example, policymakers could begin with a pilot where decisions would be made independently by both officers and AI in parallel, and have the decisions compared and the model adjusted accordingly. It is also possible for the eventual process to be a hybrid where simple cases are handled by AI, while complex cases are handled by humans. Policymakers would also need to build in mechanisms for citizen engagement to enable people to share their experience and to improve on the process.
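A parallel-run pilot of this kind can be monitored with very simple tooling: log both decisions for each case, track agreement, and route disagreements to joint review before any step towards automation. The file and column names below are illustrative.

```python
# Sketch: compare officer and model decisions made independently on the same cases.
import pandas as pd

pilot = pd.read_csv("pilot_decisions.csv")   # hypothetical: case_id, officer_decision, ai_decision

agreement = (pilot["officer_decision"] == pilot["ai_decision"]).mean()
print(f"Agreement rate: {agreement:.1%}")

# Cross-tabulate to see where the model and officers diverge; these cases go to review.
print(pd.crosstab(pilot["officer_decision"], pilot["ai_decision"]))

disagreements = pilot[pilot["officer_decision"] != pilot["ai_decision"]]
disagreements.to_csv("cases_for_joint_review.csv", index=False)
```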

Despite the significant improvements and wider applications over the last few years, AI is a nascent technology. Although good AI governance frameworks could mitigate the potential risks, there remain opportunities for misuse and abuse, intentional or not. For example, the risks of discriminatory decisions from an AI model would increase if it is trained on biased, non-inclu

sive or non-representative data. While this should not inhibit adoption, it is important for users to recognise and proactively manage risks along with experiences and advancements in the technology. As an illustration, approaches to overcome discrimination include raising awareness and applying technical solutions to detect and correct algorithmic bias.
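One family of such technical solutions works on the training data itself. The sketch below applies a simple reweighing scheme, weighting examples so that group membership and the label are statistically independent before fitting a model, and then re-checks outcome rates by group. The columns and data are illustrative, and this is only one option among many bias-mitigation techniques.

```python
# Sketch: reweigh training examples to reduce label-group dependence before fitting.
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("training_data.csv")        # hypothetical: numeric features, group, label
features = [c for c in df.columns if c not in ("group", "label")]

# Expected vs. observed joint frequencies of (group, label) give each row a weight.
p_group = df["group"].value_counts(normalize=True)
p_label = df["label"].value_counts(normalize=True)
p_joint = df.groupby(["group", "label"]).size() / len(df)
df["weight"] = df.apply(
    lambda r: (p_group[r["group"]] * p_label[r["label"]]) / p_joint[(r["group"], r["label"])],
    axis=1,
)

model = LogisticRegression(max_iter=1000)
model.fit(df[features], df["label"], sample_weight=df["weight"])

# Re-check positive prediction rates by group after reweighing.
df["pred"] = model.predict(df[features])
print(df.groupby("group")["pred"].mean())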

49 (cont.) https://www.imda.gov.sg/imtalent/about-us/national-talent-development-initiatives/techskills-accelerator-tesa
50 For an elaboration of approaches, see IMDA and PDPC, Singapore, “Model Artificial Intelligence Governance Framework,” 2nd edition, 30.

3.4. Promote partnerships and collaborations

Governments could be a trailblazer in the adoption of AI for policymaking and serve as an avenue to testbed, deploy and scale up AI solutions. A whole-of-government approach, including establishing an inter-agency taskforce, could go a long way in advancing AI use

157、 in policymaking,as it is possible for different agencies to be collecting data that are specific only to their area of responsibilities.At the same time,it should be recognised that there are aspects of AI technology where the public sector could leverage the strengths of various other stakeholders

158、,including academia and the private sector,and foster partnerships and collaborations with them.For example,economies could invest in AI research and development by providing grants/incentives to institutes of higher learning to establish research programmes on AI governance and to launch technology

159、 centres/facilities focusing on data analytics.Economies could also introduce financing mechanisms to help start-ups focusing on AI to scale up.Additionally,they could set up advisory councils made up of experts and representatives from diverse fields such as law and ethics to advise on the use of A

160、I.Given the volume of data collected and the varied services provided by the public sector,governments could enhance access to different kinds of data(e.g.,energy,transport)via open data platforms.3.5.Leverage regional cooperation The adoption of AI varies across economies.Some economies have applie

161、d AI at a faster rate than others for various reasons,including the need to address structural issues such as labour constraints and ageing populations.The situation in the APEC region is no different,and the diversity provides the basis for economies to come together to share experiences and best p

162、ractices on AI use.It is important to have a multi-jurisdictional approach to AI adoption.For example,the need for improved access to AI-related goods and services would require economies to tackle tariffs and other barriers;the high volume of data needed to train AI would require better cross-borde

163、r data flows;and the specialised skills needed to operationalise AI systems would require attention to labour mobility.It should be noted that APEC is perhaps one of the most vibrant regions globally on the digital front:some of its members are among the first in the world to sign digital economy ag

164、reements.Indeed,the Digital Economy Partnership Agreement(DEPA)between Chile;New Zealand;and Singapore,51 as well as the Digital Economy Agreement between Australia and Singapore52 acknowledge the value of developing governance frameworks for AI technologies.51 Chile,New Zealand and Singapore,Digita

165、l Economy Partnership Agreement,11 June 2020,https:/www.mti.gov.sg/-/media/MTI/Microsites/DEAs/Digital-Economy-Partnership-Agreement/Digital-Economy-Partnership-Agreement.pdf At the fundamental level,there is an increased recognition that AI could have long-term social consequences.Some of them have

166、 started to play out,such as replacing human labour in certain tasks,and it is critical for economies to look into policies aimed at supporting the transitions,such as active labour market policies and lifelong learning programmes.Yet,there are others still in the realm of science fiction,such as th

167、e development of AGI,whose decision-making process would be too complex to be explainable.But,as with time,technology marches on and the policy discourse needs to catch up.It is high time to have global discussions on AI,and APEC as an incubator of ideas should step up and contribute to the discussi

168、ons.Andre Wirjo is Analyst,Sylwyn Calizo Jr.is Researcher,Glacer Nio Vasquez is Consultant,and Emmanuel A.San Andres is Senior Analyst at the APEC Policy Support Unit.The views expressed in this Policy Brief are those of the authors and do not represent the views of APEC member economies.This work i

169、s licensed under the Creative Commons Attribution-NonCommercialShareAlike 3.0 Singapore License.The APEC Policy Support Unit(PSU)is the policy research and analysis arm for APEC.It supports APEC members and fora in improving the quality of their deliberations and decisions and promoting policies tha

170、t support the achievement of APECs goals by providing objective and high quality research,analytical capacity and policy support capability.Address:35 Heng Mui Keng Terrace,Singapore 119616 Website:www.apec.org/About-Us/Policy-Support-Unit E-mail:psugroupapec.org APEC#222-SE-01.18 52 Australia and Singapore,AustraliaSingapore Digital Economy Agreement,6 August 2020,https:/www.dfat.gov.au/sites/default/files/australia-singapore-digital-economy-agreement.pdf
