Architecture options Cloud RAN vDU


Open Edge server use in Cloud RAN
Mika Hatanpää, Head of R&D, Nokia

Cloud RAN has become a viable option for implementing radio access networks. This presentation introduces the Cloud RAN architecture, covers some of the deployment options for the virtualized distributed unit (vDU), and shows how Open Edge hardware can flexibly support the key acceleration technology alternatives for L1 processing.

Cloud Radio Access Network (Cloud RAN)

Cloud RAN consists of:
- Radio units (RU), communicating wirelessly with the end users (UE)
- Virtualized distributed units (vDU), processing real-time baseband functions
- Virtualized central units (vCU), processing non-real-time baseband functions

Cloud RAN deployment options

The vDU is a micro-scale data center with compute (including hardware acceleration), storage, and networking. Three deployment options are shown:
- Cloud DRAN: vDU at the cell site, vCU at the Edge
- Cloud CRAN: vDU at the Far Edge, vCU at the Edge
- Cloud CRAN: vDU and vCU co-located at the Far Edge

From the deployment diagram: the RU connects to the vDU over fronthaul (FH, ~250 µs RTT, up to ~20 km), the vDU to the vCU over midhaul (MH, 4-10 ms RTT, up to ~200 km), and the vCU reaches the core over backhaul (BH); a rough check of these budgets follows the list below. The site tiers scale as follows:
- Cell site: 10,000s of sites, 3-10s of cells per site, smallest footprint (1-2 servers)
- Far Edge: 1,000s of sites, 100-1000s of cells, smaller footprint (a few racks)
- Edge: 100s of sites, medium footprint
- Core: 10s of sites, large footprint
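These transport budgets are what push the vDU toward the cell site or Far Edge. The slides give only the raw figures; the sketch below is purely illustrative (not from the original material) and simply encodes them and subtracts fiber propagation delay, assuming roughly 5 µs per km one way:

```c
/* Illustrative sanity check of the fronthaul/midhaul budgets quoted above.
 * Assumption (not from the slides): ~5 us/km one-way propagation in fiber,
 * i.e. ~10 us/km round trip. */
#include <stdio.h>

struct transport_budget {
    const char *name;      /* fronthaul (FH) or midhaul (MH)              */
    double rtt_budget_us;  /* round-trip budget from the diagram          */
    double max_km;         /* indicative reach from the diagram           */
};

static double fiber_rtt_us(double km) { return km * 10.0; }

int main(void) {
    const struct transport_budget hops[] = {
        { "FH (RU - vDU)",    250.0,  20.0 },   /* 250 us RTT, ~20 km      */
        { "MH (vDU - vCU)", 10000.0, 200.0 },   /* upper 10 ms RTT, ~200 km */
    };
    for (size_t i = 0; i < sizeof hops / sizeof hops[0]; i++) {
        double prop = fiber_rtt_us(hops[i].max_km);
        printf("%s: %.0f us propagation RTT at %.0f km, "
               "%.0f us left for processing within a %.0f us budget\n",
               hops[i].name, prop, hops[i].max_km,
               hops[i].rtt_budget_us - prop, hops[i].rtt_budget_us);
    }
    return 0;
}
```

At 20 km, propagation alone consumes about 200 µs of the 250 µs fronthaul budget, which is why the vDU has to stay close to the radio units, while the 4-10 ms midhaul budget comfortably allows the vCU to sit a couple of hundred kilometres further back.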

vDU L1 acceleration

Processing the 5G physical layer (L1) of the fronthaul interface is very resource intensive, and it is commonly acknowledged that hardware acceleration of L1 processing is beneficial from the TCO point of view. Offloading L1 tasks to a specialized accelerator unit frees expensive CPU resources for other tasks, or enables the use of a lower-end CPU. Accelerator functions are typically implemented as PCIe-attached add-in cards in the vDU server.

There are two main approaches to L1 hardware acceleration: look-aside and inline. A look-aside accelerator works together with the CPU in processing the L1 functions; the CPU offloads selected functions to the accelerator, typically the LDPC forward error correction. An inline accelerator processes the entire L1, leaving the higher layers to the CPU. The CPU is offloaded to a larger degree, at the expense of more complexity on the accelerator card.

Look-Aside acceleration

In the Look-Aside architecture option, the general-purpose CPU acts as the master for processing L1, with selected key functions (e.g. FEC) sent back and forth to the hardware accelerator. The hardware accelerator can either be a separate PCIe card in the server, or sit on the same die ("on-die") alongside the CPU, in which case the processor is no longer a general-purpose processor (GPP): such a part would carry cost and power-consumption overhead in any other application and is better described as a custom SoC for Cloud RAN. In both Look-Aside cases, the CPU still performs many of the L1 real-time computations, for which it is inefficient, as well as the L2 and L3 processing for which a GPP is better suited.
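The presentation does not name a specific offload API; on COTS servers the look-aside FEC path is commonly driven through DPDK's bbdev library, so a minimal enqueue/dequeue sketch along those lines may help illustrate the back-and-forth described above. The device and queue identifiers, burst size, and the code-block fields left as comments are assumptions for illustration, not part of the source.

```c
/* Minimal look-aside LDPC decode sketch using DPDK bbdev (illustrative only;
 * real L1 code fills many more fields, handles errors, and runs in a tight
 * real-time loop). */
#include <stdint.h>
#include <rte_mempool.h>
#include <rte_bbdev.h>
#include <rte_bbdev_op.h>

#define DEV_ID   0   /* assumed: first bbdev device (e.g. a FEC accelerator) */
#define QUEUE_ID 0
#define BURST    16

int decode_burst(struct rte_mempool *op_pool)
{
    struct rte_bbdev_dec_op *ops[BURST];

    /* Allocate operation descriptors from a pool created earlier with
     * rte_bbdev_op_pool_create(..., RTE_BBDEV_OP_LDPC_DEC, ...). */
    if (rte_bbdev_dec_op_alloc_bulk(op_pool, ops, BURST) != 0)
        return -1;

    for (int i = 0; i < BURST; i++) {
        /* Fill ops[i]->ldpc_dec here: input/hard_output mbufs with the soft
         * bits and decoded code block, base graph, lifting size, redundancy
         * version, max iterations, etc. (omitted in this sketch). */
    }

    /* The CPU hands the FEC work to the accelerator ... */
    uint16_t enq = rte_bbdev_enqueue_ldpc_dec_ops(DEV_ID, QUEUE_ID, ops, BURST);

    /* ... and later polls for the results, doing other L1 work meanwhile. */
    uint16_t deq = 0;
    while (deq < enq)
        deq += rte_bbdev_dequeue_ldpc_dec_ops(DEV_ID, QUEUE_ID,
                                              &ops[deq], enq - deq);
    return deq;
}
```

Even in this skeleton the pattern is visible: the CPU still prepares and consumes every operation, so only the FEC math itself is moved off the cores.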

In-Line acceleration

With the In-Line architecture option (referred to by some as the Full L1 Accelerator), all or part of the L1 processing is offloaded from the CPU to a RAN SmartNIC PCIe card. In-Line acceleration SmartNICs use dedicated, optimized silicon for L1 processing and fully relieve the general-purpose CPUs (GPPs) from the ultra-high L1 processing demands. This frees up valuable CPU resources, enabling higher performance for L2 and L3 application processing. With an In-Line SmartNIC, less complex and less costly non-accelerated CPUs can be used for the L2 and L3 processing, where they are better suited. The efficiency, capacity, and connectivity of this optimized solution are higher, and the power consumption lower, than COTS server Look-Aside solutions can deliver.
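Neither option changes what L1 has to compute, only where it runs. The toy sketch below is entirely illustrative (the function names are hypothetical stand-ins, not any vendor's API) and just makes the division of labour described in the two sections above explicit:

```c
/* Toy contrast of per-slot work placement in the two acceleration models.
 * All functions are hypothetical stand-ins that only print what would run where. */
#include <stdio.h>

static void cpu_low_phy(void)      { puts("CPU: channel estimation, equalization, (de)modulation"); }
static void accel_ldpc(void)       { puts("Accelerator: LDPC encode/decode (FEC) only"); }
static void cpu_l2_l3(void)        { puts("CPU: MAC/RLC/PDCP and above"); }
static void smartnic_full_l1(void) { puts("SmartNIC: entire L1, fronthaul terminated on the card"); }

/* Look-aside: the CPU owns the L1 pipeline and dips into the accelerator for FEC. */
static void slot_look_aside(void) { cpu_low_phy(); accel_ldpc(); cpu_l2_l3(); }

/* In-line: the SmartNIC runs the whole L1; the CPU only exchanges slot
 * configuration and transport blocks with it and keeps its cycles for L2/L3. */
static void slot_inline(void)     { smartnic_full_l1(); cpu_l2_l3(); }

int main(void) {
    puts("-- look-aside --"); slot_look_aside();
    puts("-- inline --");     slot_inline();
    return 0;
}
```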

Summary

To conclude, In-Line acceleration with a Cloud RAN SmartNIC is seen as the most suitable option for all RAN deployments, including the highest-capacity mobile networks, whilst Look-Aside acceleration may be more suitable for lower-capacity, higher-latency mobile networks, if energy efficiency is of little concern. The advantages of In-Line acceleration extend to higher capacity and connectivity, better energy efficiency, lower comparable TCO, ease of integration into any server and cloud environment, and flexibility in choosing the higher-layer (L2, L3) processing architecture and the associated server hardware providers.

Reference: Nokia, "Cloud RAN: A Guide to Acceleration Options".

Open edge in Cloud RAN

The Open edge solution can flexibly support any required acceleration technology in Cloud RAN and Edge sites:
- L1-low acceleration and CPRI/eCPRI conversion
- Look-Aside accelerators for Cloud RAN
- Several inline accelerators for Cloud RAN
- GPGPU acceleration for AI/ML use cases
- etc.

Open edge is designed for far edge and cell sites:
- Thermally hardened (-5 to +55 °C)
- Compact form factor (2U, 3U, short depth, 430 mm)
- Full front access, fully front serviceable
- NEBS Level 3 compliant (thermal, EMC, safety, fire resistance, Seismic Zone 4)

Call to Action
- Join the regular Open edge calls (under the Telco project)
- White paper contribution: open-edge-use-case-cloud-ran-white-paper-v03-pdf (opencompute.org)
- Where to buy: https://www.opencompute.org/products
- Project Wiki with the latest specification: https://www.opencompute.org/wiki/Telcos/Edge
- Mailing list: https://ocp-all.groups.io/g/OCP-Edge

Thank you!
