Reinforcement Learning Application Practice Based on Environment Virtualization (基于環境虛擬化的強化學習應用實踐)
Yang Yu (俞揚), Nanjing University / Polixir (南棲仙策)

Through rewards, actions, and observations, reinforcement learning finds the optimal policy by repeated trial-and-error interaction with the environment. Reinforcement learning is the branch of machine learning concerned with learning to make decisions. Within artificial intelligence, machine learning includes supervised learning (face recognition, image recognition, statistical prediction), reinforcement learning (AI for Go, AI for games), and unsupervised learning (dimensionality reduction, data compression, data visualization).

Reinforcement Learning: About the intelligence of actions.

About Reinforcement Learning. The supervised learning objective is
J(\theta) = \int_x p(x)\,\mathrm{loss}_\theta(x)\,dx,
while the reinforcement learning objective is
J(\theta) = \int_{\mathrm{Traj}} p_\theta(\tau)\,R(\tau)\,d\tau, \qquad p_\theta(\tau) = p(s_0)\prod_{i=1}^{T} p(s_i \mid a_i, s_{i-1})\,\pi_\theta(a_i \mid s_{i-1}).
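The key difference between the two objectives is that p(x) is a fixed data distribution, whereas the trajectory distribution p_θ(τ) depends on the policy being optimized, so J(θ) has to be evaluated by interacting with (or simulating) the environment. A minimal sketch of a Monte Carlo estimate of the RL objective follows; the Gym-style env and the policy callable are hypothetical placeholders, not part of the slides.

```python
import numpy as np

def estimate_rl_objective(env, policy, n_trajectories=100, horizon=200):
    """Monte Carlo estimate of J(theta) = E_{tau ~ p_theta(tau)}[R(tau)].

    Assumes a classic Gym-style interface: env.reset() -> state and
    env.step(action) -> (state, reward, done, info); `policy(state)` returns
    the action of the current policy pi_theta.
    """
    returns = []
    for _ in range(n_trajectories):
        state = env.reset()
        total_reward = 0.0
        for _ in range(horizon):
            action = policy(state)
            state, reward, done, _ = env.step(action)
            total_reward += reward
            if done:
                break
        returns.append(total_reward)
    # Average return over sampled trajectories approximates J(theta).
    return float(np.mean(returns))
```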
An agent interacts with the environment in a loop: it observes the state, takes an action (a decision), and receives a reward.

Why SL has wide applications: supervised learning is much more data-driven, with less hand-crafted knowledge built in, and therefore finds more applications. As Rich Sutton put it in "The Bitter Lesson": "the actual contents of minds are tremendously, irredeemably complex; we should stop trying to find simple ways to think about the contents of minds ... We want AI agents that can discover like we can, not which contain what we have discovered. Building in our discoveries only makes it harder to see how the discovering process can be done."
Human-level records of RL: 1992 TD-Gammon, 2014 Deep Q-Network, 2016 AlphaGo, 2018 AlphaZero, 2019 AlphaStar, 2020 MuZero, 2020 Agent57.
Industrial problem example: hybrid mode control, with data collected from a bad policy and a global constraint on the task.

Demands in industrial applications:
1. Trial-and-success (rather than trial-and-error).
2. Very few data: decision data is always small.
3. Fully offline evaluation: no errors, adaptive, a clear performance expectation, and confidence for going online.
4. Other challenges: changing reward functions, and customers mostly have no knowledge about RL for their decision-making tasks.

Recent application by DeepMind:
J. Degrave et al. Magnetic control of tokamak plasmas through deep reinforcement learning. Nature 602:414-419, 2022.
"We use a simulator that has enough physical fidelity to describe the evolution of plasma shape and current, while remaining sufficiently computationally cheap for learning."

"This achievement required overcoming gaps in capability and infrastructure through scientific and engineering advances:
1. an accurate, numerically robust simulator;
2. an informed trade-off between simulation accuracy and computational complexity;
3. a sensor and actuator model tuned to specific hardware control;
4. realistic variation of operating conditions during training;
5. a highly data-efficient RL algorithm that scales to high-dimensional problems;
6. an asymmetric learning setup with an expressive critic but fast-to-evaluate policy;
7. a process for compiling neural networks into real-time-capable code and deployment on a tokamak digital control system."

J. Degrave et al. Magnetic control of tokamak plasmas through deep reinforcement learning. Nature 602:414-419, 2022.
A general development process in applications: business understanding → business problem definition → data processing → algorithm tuning → deployment and operation, with dynamic adjustment of the algorithms/models; algorithm engineers, annotation engineers, and operations staff are involved throughout (from Weinan's talk; from HLZhen's talk).

Data-driven RL: Offline RL. A policy is trained directly from logged data, and no model can be put into use without validation. Three practical issues:
1. Online selection w.r.t. a deterministic policy.
2. Online selection vs. offline selection.
3. Impact of conservative data.

Demands in industrial applications (revisited): trial-and-success; very few data (decision data is always small); fully offline evaluation (no errors, adaptive, a performance expectation, confidence for going online, global constraints); other challenges (changing reward functions, little customer knowledge about RL). On top of these, there are no useful simulators, even for many industrial tasks.

Simulators/models: learning environment models. From historical action-response data (s_0, a_0), (s_1, a_1), (s_2, a_2), ..., a model is trained to reproduce the responses s_1, s_2, s_3, ..., yielding an environment model.
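As a hedged sketch of this step (a generic supervised baseline, not the authors' actual model class), learning an environment model can be cast as regression from (s_t, a_t) to s_{t+1}; the array shapes and the MLP choice below are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def fit_env_model(states, actions, next_states):
    """Supervised next-state model: (s_t, a_t) -> s_{t+1}.

    `states` has shape (N, ds), `actions` (N, da), `next_states` (N, ds);
    the later slides explain why this naive regression suffers from
    compounding error and execution bias.
    """
    inputs = np.concatenate([states, actions], axis=1)
    model = MLPRegressor(hidden_layer_sizes=(256, 256), max_iter=500)
    model.fit(inputs, next_states)
    return model

def virtual_rollout(model, policy, s0, horizon):
    """Roll the learned model forward under a policy to get a virtual trajectory."""
    s, traj = np.asarray(s0, dtype=float), []
    for _ in range(horizon):
        a = np.asarray(policy(s), dtype=float)
        s = model.predict(np.concatenate([s, a]).reshape(1, -1))[0]
        traj.append(s)
    return np.array(traj)
```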
Problem 1: Compounding error. In the hybrid mode control task, rolling out 1,800 steps in the supervise-learned model drives the rollout trajectory far away from the real one.

Michael Kearns and Satinder Singh. Near-optimal reinforcement learning in polynomial time. Machine Learning, 49(2-3):209-232, 2002.

Given \max_{s,a}\|\hat{P}(s,a)-P^{*}(s,a)\|_{1}\le\epsilon, for any policy \pi,
\|V^{\pi}_{\hat{M}}-V^{\pi}_{M^{*}}\|_{\infty}\le\frac{\epsilon\gamma}{2(1-\gamma)^{2}},
so even a small step-wise error can lead to a large difference in long-horizon value (see the toy illustration below).
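An illustrative toy (not the vehicle data behind the slides): the snippet rolls a learned model with a small per-step bias alongside hypothetical true dynamics for 1,800 steps; the 1/(1-γ)-style amplification turns a 0.01 one-step error into a gap about twenty times larger.

```python
def true_step(s):
    return 0.95 * s           # hypothetical true dynamics

def learned_step(s):
    return 0.95 * s + 0.01    # same dynamics with a small per-step bias

s_real = s_model = 1.0
for t in range(1800):
    s_real, s_model = true_step(s_real), learned_step(s_model)

one_step_gap = abs(learned_step(1.0) - true_step(1.0))   # 0.01
rollout_gap = abs(s_model - s_real)                       # approaches 0.01 / (1 - 0.95) = 0.2
print(f"one-step error: {one_step_gap:.3f}, error after 1800 steps: {rollout_gap:.3f}")
```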
Compounding error is addressed by distribution matching:
Tian Xu, Ziniu Li, Yang Yu. Error bounds of imitating policies and environments. NeurIPS 2020.
With distribution matching, the compounding behaviour of the model error is eliminated.

Back to the application: hybrid mode control. Rollouts in repeated experiments now stay close to the real trajectories. The mode control is then optimized by running RL inside the learned environment model (a minimal sketch of optimizing in a learned model follows below), aiming to reduce fuel consumption under the same end-point battery level. Tested fuel consumption: 4.68 for the old policy vs. 4.56 for the optimized policy. The decisions of the old and optimized policies can also be inspected and explained.
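The slides do not give the optimizer used inside the learned model; as a hedged stand-in, the sketch below uses simple random-shooting search over action sequences scored by virtual rollouts. `env_model` is the regression model from the earlier sketch, while `reward_fn` and the action bounds are hypothetical.

```python
import numpy as np

def plan_in_learned_model(env_model, reward_fn, s0, horizon=50,
                          n_candidates=256, action_dim=1, rng=None):
    """Random-shooting search: sample candidate action sequences, evaluate each
    by a virtual rollout in the learned model, and keep the best sequence."""
    if rng is None:
        rng = np.random.default_rng(0)
    candidates = rng.uniform(-1.0, 1.0, size=(n_candidates, horizon, action_dim))
    best_seq, best_return = None, -np.inf
    for seq in candidates:
        s, total = np.asarray(s0, dtype=float), 0.0
        for a in seq:
            s_next = env_model.predict(np.concatenate([s, a]).reshape(1, -1))[0]
            total += reward_fn(s, a, s_next)   # e.g. negative fuel consumption
            s = s_next
        if total > best_return:
            best_return, best_seq = total, seq
    return best_seq, best_return
```

In practice a policy-learning algorithm trained entirely on virtual rollouts would replace this search; the point is only that all optimization happens in the virtual environment, never on the real system.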
Problem 2: Execution bias.
Xiong-Hui Chen, Yang Yu, Zheng-Mao Zhu, Zhihua Yu, Zhenjun Chen, Chenghe Wang, Yinan Wu, Hongqiu Wu, Rong-Jun Qin, Ruijin Ding, Fangsheng Huang. Adversarial Counterfactual Environment Model Learning. CoRR abs/2206.04890, 2022.

Dosage response curves in six cities: models learned from the training data reflect the causal structure of the data-collecting policy (state → action) instead of the real causal model (action → next state), an effect analogous to Simpson's paradox (https://en.wikipedia.org/wiki/Simpson%27s_paradox).

Problem 2, an extremely simple case. Sampled data (s, a, s'): (1, -0.1, 0.9), (0.9, -0.09, 0.81), (0.81, -0.081, 0.729). The real dynamics are s' = s + a and the behaviour policy is a = -0.1 s, so a linear regression on these samples cannot recover the real dynamics (reproduced in the snippet below). An infeasible fix would be to add exploration noise during data collection, e.g. a = -0.1 s + 0.0001 rand(), a = -0.1 s + 0.001 rand(), or a = -0.1 s + 0.01 rand(). Is that fatal? This kind of curve, where s settles towards certain values under s' = s + a with a = -0.1 s, is commonly found in control tasks, so the issue is widespread.
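The toy case can be checked in a few lines (an illustrative reproduction, not the paper's experiment): with data logged under the deterministic behaviour policy a = -0.1 s, the regressors s and a are perfectly collinear, so least squares cannot identify the effect of the action and answers counterfactual queries wrongly; a small amount of action noise, the "infeasible" fix above, restores identifiability.

```python
import numpy as np

rng = np.random.default_rng(0)

def collect(noise_scale, n=200):
    """Log (s, a, s') under the behaviour policy a = -0.1*s (+ optional noise)."""
    s, rows = 1.0, []
    for _ in range(n):
        a = -0.1 * s + noise_scale * rng.uniform(-1.0, 1.0)
        s_next = s + a                     # real dynamics: s' = s + a
        rows.append((s, a, s_next))
        s = s_next
    return np.array(rows)

for noise in (0.0, 0.01):
    data = collect(noise)
    X, y = data[:, :2], data[:, 2]
    w, *_ = np.linalg.lstsq(X, y, rcond=None)   # fit s' ~ w_s*s + w_a*a
    pred = w @ np.array([1.0, 0.5])             # counterfactual query; true answer is 1.5
    print(f"noise={noise}: w={np.round(w, 3)}, prediction for (s=1, a=0.5) = {pred:.2f}")
```

Without noise the fitted coefficients come out near (0.89, -0.09), so the model barely responds to the action (and with the wrong sign), which is exactly the execution bias described above; with noise the true coefficients (1, 1) are recovered.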
Solution: Adversarial Counterfactual Risk Minimization, compared against plain supervised learning and inverse propensity weighting (IPW).
Xiong-Hui Chen, Yang Yu, Zheng-Mao Zhu, Zhihua Yu, Zhenjun Chen, Chenghe Wang, Yinan Wu, Hongqiu Wu, Rong-Jun Qin, Ruijin Ding, Fangsheng Huang. Adversarial Counterfactual Environment Model Learning. CoRR abs/2206.04890, 2022.

Application results are reported in the same paper.

More applications 1: water-pump efficiency modeling (flow rate vs. efficiency), built from daily operation data and checked against experimental test results.

More applications 2: control-system modeling, learned from training data and verified with a robustness test.
Categories of offline RL. Offline RL methods divide into model-free and model-based approaches.

Model-free:
BCQ: Scott Fujimoto, David Meger, and Doina Precup. Off-policy deep reinforcement learning without exploration. ICML 2019.
CQL: Aviral Kumar, Aurick Zhou, George Tucker, and Sergey Levine. Conservative Q-learning for offline reinforcement learning. NeurIPS 2020.

Model-based, model-assistant:
MOPO: T. Yu, G. Thomas, L. Yu, S. Ermon, J. Y. Zou, S. Levine, C. Finn, and T. Ma. MOPO: Model-based offline policy optimization. Advances in Neural Information Processing Systems, 33:14129-14142, 2020.
COMBO: T. Yu, A. Kumar, R. Rafailov, A. Rajeswaran, S. Levine, and C. Finn. COMBO: Conservative offline model-based policy optimization. Advances in Neural Information Processing Systems, 34, 2021.

Model-based, fully model-based: Vtaobao (AAAI 2019), Vdidi (KDD 2019), and
1. Xiong-Hui Chen, Yang Yu, Zheng-Mao Zhu, Zhihua Yu, Zhenjun Chen, Chenghe Wang, Yinan Wu, Hongqiu Wu, Rong-Jun Qin, Ruijin Ding, Fangsheng Huang. Adversarial Counterfactual Environment Model Learning. CoRR abs/2206.04890, 2022.
2. Tian Xu, Ziniu Li, Yang Yu. Error bounds of imitating policies and environments. In: Advances in Neural Information Processing Systems 33 (NeurIPS 2020), Virtual Conference, 2020.

Environment learning closes the loop: data from the real world are used to learn a virtual world, and an RL solver run in that virtual world delivers data-driven RL for real-world decision-making. Thank you! (謝謝!) Closing contrast: the tasks in most RL papers today vs. the tasks we are solving by RL now.