4-4 基于環境虛擬化的強化學習應用實踐.pdf — Reinforcement Learning Application Practice Based on Environment Virtualization


Reinforcement Learning Application Practice Based on Environment Virtualization
Yang Yu (俞揚), Nanjing University / Polixir (南棲仙策)

Reinforcement learning (RL) finds the optimal policy through repeated trial-and-error interaction with the environment: the agent observes the state, takes an action, and receives a reward. RL is the branch of machine learning concerned with learning how to make decisions.

Where RL sits within AI: artificial intelligence > machine learning, which splits into
- supervised learning: face recognition, image recognition, statistical prediction;
- reinforcement learning: AI for Go, AI for games;
- unsupervised learning: dimensionality reduction, data compression, data visualization.

Reinforcement Learning: about the intelligence of actions.

Supervised learning objective:
J(\theta) = \int_x p(x)\,\mathrm{loss}_\theta(x)\,dx

Reinforcement learning objective:
J(\theta) = \int_{\mathrm{Traj}} p_\theta(\tau)\,R(\tau)\,d\tau, \quad p_\theta(\tau) = p(s_0)\prod_{i=1}^{T} p(s_i \mid a_i, s_{i-1})\,\pi_\theta(a_i \mid s_{i-1})

The agent-environment loop: the agent sends an action/decision to the environment; the environment returns the reward and the next state.
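To make the RL objective concrete, here is a minimal sketch of estimating J(θ) by Monte Carlo: roll the policy out, sum the rewards of each trajectory, and average over trajectories. The Gym-style env.reset()/env.step() interface (4-tuple return) and the policy callable are illustrative assumptions, not part of the slides.

```python
import numpy as np

def estimate_rl_objective(env, policy, num_trajectories=100, horizon=1000):
    """Monte Carlo estimate of J(theta) = E_{tau ~ p_theta}[R(tau)].

    Assumes a Gym-style environment (reset() -> state, step(a) ->
    (state, reward, done, info)) and a policy callable state -> action.
    """
    returns = []
    for _ in range(num_trajectories):
        state = env.reset()
        total_reward = 0.0
        for _ in range(horizon):
            action = policy(state)                     # a_i ~ pi_theta(. | s_{i-1})
            state, reward, done, _ = env.step(action)  # s_i ~ p(. | a_i, s_{i-1})
            total_reward += reward                     # accumulate R(tau)
            if done:
                break
        returns.append(total_reward)
    return float(np.mean(returns))                     # average return over trajectories
```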

Why does supervised learning (SL) have such wide applications? SL is much more data-driven: less manual engineering, more applications.

"The actual contents of minds are tremendously, irredeemably complex; we should stop trying to find simple ways to think about the contents of minds ... We want AI agents that can discover like we can, not which contain what we have discovered. Building in our discoveries only makes it harder to see how the discovering process can be done." (Rich Sutton, The Bitter Lesson)

Human-level records of RL: TD-Gammon (1992), Deep Q-Network (2014), AlphaGo (2016), AlphaZero (2018), AlphaStar (2019), MuZero (2020), Agent57 (2020).

Industrial problem example: hybrid mode control — the available data comes from a bad policy, and there is a global constraint.

Demands in industrial applications:
1. Trial-and-success: no errors, adaptive, performance expectation.
2. Very few data: decision data is always small.
3. Fully offline evaluation: confidence for going online.
4. Other challenges: changing reward functions; customers mostly have no knowledge about RL for their decision-making tasks.

Recent application by DeepMind: magnetic control of tokamak plasmas through deep reinforcement learning (J. Degrave et al. Magnetic control of tokamak plasmas through deep reinforcement learning. Nature 602:414-419, 2022).

"We use a simulator that has enough physical fidelity to describe the evolution of plasma shape and current, while remaining sufficiently computationally cheap for learning."

"This achievement required overcoming gaps in capability and infrastructure through scientific and engineering advances: 1. an accurate, numerically robust simulator; 2. an informed trade-off between simulation accuracy and computational complexity; 3. a sensor and actuator model tuned to specific hardware control; 4. realistic variation of operating conditions during training; 5. a highly data-efficient RL algorithm that scales to high-dimensional problems; 6. an asymmetric learning setup with an expressive critic but fast-to-evaluate policy; 7. a process for compiling neural networks into real-time-capable code and deployment on a tokamak digital control system."

A general development process in applications: business understanding -> business problem definition -> data processing -> algorithm tuning -> deployment and operation, with dynamic adjustment of the algorithm/model; the work involves algorithm engineers, annotation engineers, and operations staff (from Weinan's talk and HLZhen's talk).

Data-driven RL: offline RL — train the policy from logged data alone (data -> training -> policy). No model can be deployed without validation.

Data-driven RL: offline RL — issues in how policies are selected and validated:
1. Online selection w.r.t. a deterministic policy.
2. Online selection vs. offline selection.
3. Impact of conservative data.

Demands in industrial applications (recap):
1. Trial-and-success: no errors, adaptive, performance expectation.
2. Very few data: decision data is always small.
3. Fully offline evaluation: confidence for going online; global constraints.
4. Other challenges: changing reward functions; customers mostly have no knowledge about RL for their decision-making tasks.

There are no useful simulators, even for many industrial tasks, so the simulators/models have to be learned.

Learning environment models from historical action-response data: given logged pairs (s_0, a_0), (s_1, a_1), (s_2, a_2), ..., fit an environment model that predicts the successive states s_1, s_2, s_3, ... — as sketched below.
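As a sketch of this model-learning step (illustrative only; the regressor choice and array shapes are assumptions, not from the slides), a one-step transition model can be fit by supervised regression on the logged (s, a) -> s' pairs and then rolled out:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def fit_env_model(states, actions, next_states):
    """Fit a one-step environment model f(s, a) -> s' by supervised regression.

    states, next_states: arrays of shape (N, state_dim); actions: (N, action_dim).
    """
    inputs = np.concatenate([states, actions], axis=1)           # model input is (s, a)
    model = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=2000)
    model.fit(inputs, next_states)                                # minimize one-step error
    return model

def rollout_model(model, policy, s0, horizon):
    """Roll a policy out inside the learned model, starting from state s0.

    policy(state) is assumed to return a 1-D action array.
    """
    state = np.asarray(s0, dtype=float)
    trajectory = [state]
    for _ in range(horizon):
        action = np.asarray(policy(state), dtype=float)
        state = model.predict(np.concatenate([state, action])[None, :])[0]
        trajectory.append(state)
    return trajectory
```

Rolling such a model out for many steps is exactly where the next problem, compounding error, shows up.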

Problem 1: compounding error. In the hybrid-mode-control task, rolling out 1800 steps in the supervised-learned model drifts far away from the real trajectory (real vs. rollout).

Compounding error is solved by distribution matching. The classical bound (Michael Kearns and Satinder Singh. Near-optimal reinforcement learning in polynomial time. Machine Learning, 49(2-3):209-232, 2002) states: given \max_{s,a} \|P(\cdot \mid s,a) - \hat{P}(\cdot \mid s,a)\|_1 \le \epsilon, then for any policy \pi,

\|V^\pi_M - V^\pi_{\hat{M}}\|_\infty \le \frac{\epsilon \gamma}{2(1-\gamma)^2},

so even a small step-wise error can lead to a large difference. With distribution-matching model learning (Tian Xu, Ziniu Li, Yang Yu. Error bounds of imitating policies and environments. NeurIPS 2020), the compounding-error problem is eliminated.
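To see how a small per-step error compounds, here is a toy illustration (not from the slides; the dynamics, bias, and policy are made up): the model is wrong by only 0.001 per step, yet after 1800 closed-loop steps the trajectories differ by roughly 100 times that amount.

```python
import numpy as np

# True dynamics: s' = s + a.  The "learned" model carries a tiny additive bias.
delta = 0.001                       # one-step prediction error of the model
policy = lambda s: -0.01 * s        # a weak feedback policy (slow control loop)

def rollout(step_fn, s0, horizon):
    s, traj = s0, [s0]
    for _ in range(horizon):
        s = step_fn(s, policy(s))
        traj.append(s)
    return np.array(traj)

real  = rollout(lambda s, a: s + a,         1.0, 1800)   # true environment
model = rollout(lambda s, a: s + a + delta, 1.0, 1800)   # slightly wrong model

print("one-step model error:", delta)                          # 0.001
print("gap after 1800 steps:", abs(real[-1] - model[-1]))      # ~0.1, i.e. ~100x larger
```

The amplification grows with the effective horizon, echoing the horizon-dependent factors in the bound above.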

Back to the application (hybrid mode control): rollouts of the learned model in repeated experiments. The mode control is then optimized with RL inside the learned environment model, reducing fuel consumption under the same end-point battery level. Tested fuel consumption: old policy 4.68, optimized policy 4.56. The decisions of the old and the optimized policy can be compared and explained.

Problem 2: execution bias (Xiong-Hui Chen, Yang Yu, Zheng-Mao Zhu, Zhihua Yu, Zhenjun Chen, Chenghe Wang, Yinan Wu, Hongqiu Wu, Rong-Jun Qin, Ruijin Ding, Fangsheng Huang. Adversarial Counterfactual Environment Model Learning. CoRR abs/2206.04890, 2022). Example: dosage-response curves in 6 cities — the models learned from the training data follow the causal structure under which the data was collected rather than the real causal model, an instance of Simpson's paradox (https://en.wikipedia.org/wiki/Simpson%27s_paradox).

Problem 2, an extremely simple case. Sampled data (s, a, s'): (1, -0.1, 0.9), (0.9, -0.09, 0.81), (0.81, -0.081, 0.729). The true dynamics are s' = s + a, but the behavior policy that collected the data is a = -0.1 s, so linear regression cannot separate the effect of a from the effect of s. An infeasible solution: add exploration noise to the behavior policy, e.g. a = -0.1 s + 0.0001 rand(), a = -0.1 s + 0.001 rand(), a = -0.1 s + 0.01 rand(). Is that fatal? This kind of curve — the state s approaching certain values under s' = s + a with a = -0.1 s — is commonly found in control tasks.
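The failure in this extremely simple case can be reproduced in a few lines (illustrative sketch, not from the slides): least squares fits s' ≈ w_s·s + w_a·a from data where a = -0.1·s, reproduces the logged data perfectly, and still answers counterfactual action queries wrongly.

```python
import numpy as np

# Logged transitions (s, a, s') collected by the behavior policy a = -0.1 s,
# with true dynamics s' = s + a (so s' = 0.9 s on every logged sample).
s = np.array([1.0, 0.9, 0.81])
a = -0.1 * s
s_next = s + a

# Ordinary least squares for s' ~ w_s * s + w_a * a.
X = np.stack([s, a], axis=1)
w, *_ = np.linalg.lstsq(X, s_next, rcond=None)
print("learned weights (w_s, w_a):", w)   # not (1, 1): a is perfectly collinear with s

def predict(state, action):
    return w[0] * state + w[1] * action

# On the data-collection policy the model looks perfect ...
print("on-policy:      model", predict(1.0, -0.1), "vs true", 1.0 - 0.1)
# ... but a counterfactual action exposes the execution bias.
print("counterfactual: model", predict(1.0, +0.1), "vs true", 1.0 + 0.1)
```

Any weight pair on the line w_s - 0.1·w_a = 0.9 fits the logged data equally well, so plain supervised learning has no way to recover the true effect of the action; this is the gap the adversarial counterfactual approach below targets.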

Solution: adversarial counterfactual risk minimization, compared against plain supervised learning and inverse propensity weighting (IPW). Application results are reported in Xiong-Hui Chen, Yang Yu, Zheng-Mao Zhu, Zhihua Yu, Zhenjun Chen, Chenghe Wang, Yinan Wu, Hongqiu Wu, Rong-Jun Qin, Ruijin Ding, Fangsheng Huang. Adversarial Counterfactual Environment Model Learning. CoRR abs/2206.04890, 2022.

More applications 1: water-pump efficiency modeling — flow-rate vs. efficiency curves built from daily operational data and checked against experimental test results.

More applications 2: control-system modeling — models built from training data and verified with robustness tests.

Categories of offline RL: model-free vs. model-based, the latter ranging from model-assisted to fully model-based.
- Model-free: BCQ (Scott Fujimoto, David Meger, and Doina Precup. Off-policy deep reinforcement learning without exploration. ICML 2019); CQL (Aviral Kumar, Aurick Zhou, George Tucker, and Sergey Levine. Conservative Q-learning for offline reinforcement learning. NeurIPS 2020).
- Model-based: MOPO (T. Yu, G. Thomas, L. Yu, S. Ermon, J. Y. Zou, S. Levine, C. Finn, and T. Ma. MOPO: Model-based offline policy optimization. Advances in Neural Information Processing Systems 33:14129-14142, 2020); COMBO (T. Yu, A. Kumar, R. Rafailov, A. Rajeswaran, S. Levine, and C. Finn. COMBO: Conservative offline model-based policy optimization. Advances in Neural Information Processing Systems 34, 2021).
- Fully model-based examples include Vtaobao (AAAI 2019) and Vdidi (KDD 2019), plus the environment-learning work in this talk:
  1. Xiong-Hui Chen, Yang Yu, Zheng-Mao Zhu, Zhihua Yu, Zhenjun Chen, Chenghe Wang, Yinan Wu, Hongqiu Wu, Rong-Jun Qin, Ruijin Ding, Fangsheng Huang. Adversarial Counterfactual Environment Model Learning. CoRR abs/2206.04890, 2022.
  2. Tian Xu, Ziniu Li, Yang Yu. Error bounds of imitating policies and environments. In: Advances in Neural Information Processing Systems 33 (NeurIPS 2020), Virtual Conference, 2020.

Environment learning: real-world data is used to learn a virtual world; an RL solver trained in that virtual world supports data-driven RL for real-world decision-making.

Thank you! (謝謝!)

Tasks in most RL papers today vs. tasks we are solving by RL now.
