Open Edge server use in Cloud RAN
Architecture options: Cloud RAN vDU
Mika Hatanp, Head of R&D, Nokia

Cloud RAN has become a viable option for implementing radio access networks. This presentation introduces the Cloud RAN architecture, covers some deployment options for the virtualized distributed unit (vDU), and shows how Open Edge hardware can flexibly support the key acceleration technology alternatives for L1 processing.

Cloud Radio Access Network (Cloud RAN)
Cloud RAN consists of:
- Radio units (RU), communicating wirelessly with end users (UE)
- Virtualized distributed units (vDU), processing real-time baseband functions
- Virtualized central units (vCU), processing non-real-time baseband functions

Cloud RAN deployment options
A vDU is a micro-scale data center with compute (including hardware acceleration), storage and networking. The main deployment options are:
- Cloud D-RAN: vDU at the cell site, vCU at the Edge
- Cloud C-RAN: vDU at the Far Edge, vCU at the Edge
- Cloud C-RAN: vDU and vCU co-located at the Far Edge

The site hierarchy runs from cell site through Far Edge and Edge to the Core, with the number of sites falling from the 10,000s (cell sites) through 1,000s and 100s down to 10s (core), while cells per site grow from 3-10s at the cell site to 100-1000s at the Far Edge. Footprint grows correspondingly: smallest (1-2 servers) at the cell site, smaller (a few racks) at the Far Edge, medium at the Edge and large at the Core. The fronthaul (FH) between RU and vDU has a round-trip-time budget of about 250 µs, limiting the distance to roughly 20 km; the midhaul (MH) between vDU and vCU tolerates 4-10 ms RTT, or roughly 200 km; backhaul (BH) connects onward to the Core.

vDU L1 acceleration
Processing the 5G physical layer (L1) of the fronthaul interface is very resource intensive. It is commonly acknowledged that hardware acceleration of L1 processing is beneficial from a TCO point of view: offloading L1 tasks to a specialized accelerator unit frees expensive CPU resources for other tasks, or enables the use of a lower-end CPU. Accelerator functions are typically implemented as PCIe-attached add-in cards in the vDU server.

There are two main approaches to L1 hardware acceleration: Look-Aside and In-Line. A Look-Aside accelerator works together with the CPU in processing the L1 functions; the CPU offloads selected functions to the accelerator, typically the LDPC forward error correction. An In-Line accelerator processes the entire L1, leaving the higher layers to the CPU. The CPU is offloaded to a larger degree, at the expense of more complexity on the accelerator card.
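The fronthaul and midhaul RTT budgets quoted in the deployment options can be sanity-checked against fiber propagation delay. A minimal sketch, assuming light in fiber travels at roughly 200,000 km/s (refractive index ~1.5, i.e. ~5 µs per km one way) and ignoring equipment and processing delay, so the results are upper bounds:

```python
# Sanity-check the fronthaul/midhaul distance limits implied by the
# RTT budgets in the deployment options. Propagation only; real
# deployments lose additional budget to processing delay.

FIBER_SPEED_KM_PER_S = 200_000  # approximate speed of light in fiber

def max_one_way_distance_km(rtt_budget_s: float) -> float:
    """Largest one-way fiber run that fits within an RTT budget."""
    return FIBER_SPEED_KM_PER_S * rtt_budget_s / 2

# Fronthaul (RU <-> vDU): ~250 us RTT budget
fh_km = max_one_way_distance_km(250e-6)
# Midhaul (vDU <-> vCU): 4-10 ms RTT budget; take the upper end
mh_km = max_one_way_distance_km(10e-3)

print(f"Fronthaul limit: {fh_km:.0f} km")  # ~25 km of fiber
print(f"Midhaul limit:   {mh_km:.0f} km")
```

The fronthaul result (~25 km of pure propagation) is consistent with the ~20 km figure above once some processing delay is accounted for; for midhaul, propagation alone would allow far more than 200 km, so other delay components dominate that budget.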
Look-Aside acceleration
In the Look-Aside architecture option, the general-purpose computing CPU acts as the master for L1 processing, with selected key functions (e.g. FEC) sent back and forth to the hardware accelerator. The accelerator can either be a separate PCIe card in the server, or be integrated "on-die" alongside the CPU, in which case the part is no longer a general-purpose processor (GPP): it would carry cost and power-consumption overhead in any other application, and is better described as a custom SoC for Cloud RAN. In both Look-Aside cases the CPU still performs many of the L1 real-time computations, for which it is inefficient, as well as the L2 and L3 processing for which a GPP is better suited.

In-Line acceleration
With the In-Line architecture option (referred to by some as the Full L1 Accelerator), all or part of the L1 processing is offloaded from the CPU to a RAN SmartNIC PCIe card. In-Line acceleration SmartNICs use dedicated, optimized silicon for L1 processing and fully relieve the general-purpose CPUs (GPPs) from the ultra-high L1 processing demands. This frees up valuable CPU resources, enabling higher performance for L2 and L3 application processing. With an In-Line SmartNIC, less complex and less costly non-accelerated CPUs can be used for the L2 and L3 processing where they are better suited. The efficiency, capacity and connectivity of this optimized solution are higher, and its power consumption lower, than COTS server Look-Aside solutions can deliver.
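The trade-off between the two schemes can be sketched as a toy CPU-budget model. The cycle counts below are purely hypothetical placeholders, not figures from the source; the point is only which work stays on the host CPU under each scheme:

```python
# Illustrative (made-up) per-slot CPU cost model contrasting Look-Aside
# and In-Line acceleration. All numbers are hypothetical placeholders.

# Hypothetical CPU work per slot, in arbitrary "cycle" units:
L1_FEC = 60    # LDPC encode/decode - the dominant L1 cost
L1_OTHER = 30  # remaining L1 real-time processing
L2_L3 = 40     # higher-layer processing, where a GPP is well suited

def cpu_load(offload_fec: bool, offload_full_l1: bool) -> int:
    """Cycles left on the host CPU under a given acceleration scheme."""
    load = L2_L3  # L2/L3 always stays on the host
    if not offload_full_l1:
        # Look-Aside (or no acceleration): host keeps the rest of L1,
        # and also FEC unless it is sent to the accelerator.
        load += L1_OTHER + (0 if offload_fec else L1_FEC)
    return load

no_accel   = cpu_load(offload_fec=False, offload_full_l1=False)  # 130
look_aside = cpu_load(offload_fec=True,  offload_full_l1=False)  # 70
in_line    = cpu_load(offload_fec=True,  offload_full_l1=True)   # 40

print(no_accel, look_aside, in_line)  # prints "130 70 40"
```

Under these placeholder numbers the Look-Aside card roughly halves the host CPU load, while the In-Line SmartNIC leaves only L2/L3 on the host, matching the qualitative trade-off described above.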
Summary
To conclude, In-Line acceleration with a Cloud RAN SmartNIC is seen as the most suitable option for all RAN deployments, including the highest-capacity mobile networks, whilst Look-Aside acceleration may be more suitable for lower-capacity, higher-latency mobile networks if energy efficiency is of little concern. The advantages of In-Line acceleration extend to higher capacity and connectivity, better energy efficiency, lower comparable TCO, ease of integration into any server and cloud environment, and flexibility in choosing the higher-layer (L2, L3) processing architecture and the associated server hardware providers.

Reference: Nokia, "Cloud RAN: A Guide to Acceleration Options"

Open Edge in Cloud RAN
The Open Edge solution can flexibly support any required acceleration technology in Cloud RAN and Edge sites:
- L1 low acceleration and CPRI-to-eCPRI conversion
- Look-Aside accelerators for Cloud RAN
- Several In-Line accelerators for Cloud RAN
- GPGPU acceleration for AI/ML use cases
- Etc.

Open Edge is designed for far edge and cell sites:
- Thermally hardened (-5 to +55 °C)
- Compact form factor (2U/3U, short depth, 430 mm)
- Full front access, fully front serviceable
- NEBS Level 3 compliant (thermal, EMC, safety, fire resistance, Seismic Zone 4)

Call to action
- Join the regular Open Edge calls (under the Telco project)
- White paper contribution: open-edge-use-case-cloud-ran-white-paper-v03-pdf (opencompute.org)
- Where to buy: https://www.opencompute.org/products
- Project wiki with latest specification: https://www.opencompute.org/wiki/Telcos/Edge
- Mailing list: https://ocp-all.groups.io/g/OCP-Edge

Thank you!