BRKCOM-2295
UCS I/O Connectivity with the 5th Generation VIC Cards
Eldho Jacob, Product Manager
© 2023 Cisco and/or its affiliates. All rights reserved. Cisco Public

Questions? Use the Cisco Webex App to chat with the speaker after the session:
1. Find this session in the Cisco Live Mobile App
2. Click "Join the Discussion"
3. Install the Webex App or go directly to the Webex space
4. Enter messages/questions in the Webex space
Webex spaces will be moderated by the speaker until June 9, 2023.

Agenda
- Introduction
- Server I/O Connectivity and Bandwidth
- VIC Features
- VIC Performance
- Conclusion
UCS Virtual Interface Card
- Adapter virtualization: stateless and policy driven.
- Ethernet and storage (FC, NVMeoF, etc.) across the UCS VIC family and UCS servers.

UCS VIC Topology
- UCS unified fabric: the VIC connects through the FEX/IFM to the Fabric Interconnect (FI) at 10/25/40/100G and is managed by UCS Manager; LAN and SAN hang off the FI. Benefits: simplicity, flexibility, TCO reduction.
- VIC in standalone fabrics: 10/25/40/50/100/200G directly to LAN/SAN switches, managed by Cisco IMC.
- vNICs (management, Ethernet) and vHBAs (storage/FC) appear as physical NICs and HBAs to the host OS.
Unified Fabric: Chassis Environment
- UCS blade solution: LAN, SAN A, SAN B, and management all converge on the unified fabric through the Fabric Interconnects.

Unified Fabric: Rack Server Environment
- Traditional rack server: multiple NICs and HBAs (1x management, 2x production data, 1x production data, 1x vMotion, 1x VM console, 1x VMkernel, 2x Fibre Channel HBA) cabled to an Ethernet management switch, Ethernet top-of-rack switches, and Fibre Channel top-of-rack switches.
- UCS solution: 2x 10/25/40/50/100-Gbps unified fabric to the Cisco UCS Fabric Interconnects for data, storage, and management; on-demand vNICs for management, production data, VM management, NVMeoF, iSCSI, etc.; on-demand multiple vHBAs for redundancy.

UCS VIC Additional Options: Rack Server Environment
- UCS Fabric Interconnect or Nexus N9K with a Nexus FEX: 2x 10/25-Gbps unified fabric for data and management; on-demand vNICs (management, production data, VM management, NVMeoF, iSCSI, etc.) and multiple vHBAs; FEX for rack-server scalability.
- Nexus N9K or ToR switch (direct attach): 2x 10/25/40/50/100/200-Gbps unified fabric for data and management; on-demand vNICs and multiple FCoE vHBAs.
UCS Fabric Innovation Cadence and Leadership (2009-2022)
- 1st Gen (2009): FI 6120/6140, IOM 2104, VIC M81KR/P81E (10G). Converged FC and Ethernet, 10G server/uplink, 8G FC uplink, PCIe Gen1, 128 PCIe devices, hypervisor bypass for ESX/KVM (VM-FEX). First with a compute converged fabric powered by Dynamic IOV.
- 2nd Gen (2011): FI 6248/6296, IOM 2204/2208, FEX 2232, VIC 1200 Series (10/40G). 10G server/uplink, 8G FC uplink, PCIe Gen2, 256 PCIe devices, flow classification, single-wire management, VM-FEX, usNIC, RSS, NetQueue. First with a 40G adapter, single-wire management, and a common fabric for blade and rack.
- 3rd Gen (2016): FI 6332/6332-16UP, UCS Mini (FI 6324), IOM 2304, FEX 2348, VIC 1300 Series (10/40G). 10/40G server/uplink, 16G FC uplink, PCIe Gen3, NVGRE/VXLAN, VMQ, DPDK, RoCEv1. First with an end-to-end 40G converged fabric and a multi-host adapter.
- 4th Gen (2018-2021): FI 6454/64108, IOM 2408, IFM X-9108 25G, FEX 93180YC-FX3, VIC 1400/14000 Series (10/25/40/100G) including 14425/14825 with X-Series support. 40/100G uplink, 10/25G server, 32G FC uplink, 25G N9K FEX, FC slow drain, NVGRE/VXLAN/GENEVE, NVMeoF (FC-NVMe, RoCEv2), VMMQ. Transition to the 25G fabric and first with a cloud-operated compute fabric for blades (X-Series).
- 5th Gen (2022, currently shipping): FI 6536, IFM X-9108 100G, VIC 15000 Series (10/25/40/50/100/200G). 100G server and uplink, 32G FC uplink, PCIe Gen4, GENEVE and RoCEv2 performance, ECN, PTPv2, SR-IOV, SIOV, RSSv2, 16K Rx ring size; 100G per VIC and 200G per X210c. First with an end-to-end 100G fabric for blades, pushing the performance envelope.
Cisco Virtual Interface Card (VIC) Innovation
- 1st Gen VIC (2010): M81KR, P81E; 10GbE; 16x PCIe Gen1; 128 PCIe devices; hypervisor bypass for ESX and KVM.
- 2nd Gen VIC (2012-2014): VIC 1225, 1227, 1227T, 1240, 1280, 1285; 10/40GbE and 10GBase-T; 16x PCIe Gen2; 256 PCIe devices; single-wire management; SR-IOV for Windows 2012/Hyper-V; usNIC; NetFlow.
- 3rd Gen VIC (2014-2016): VIC 1340, 1380, 1385, 1387; 40GbE; PCIe Gen3; NVGRE/VXLAN; RoCEv1.
- 4th Gen VIC (2018-2021): VIC 1440, 1455, 1457, 1480, 1495, 1497, 1467, 1477, 14425, 14825; 10/25/40/100 GbE; PCIe Gen3; NVGRE/VXLAN/GENEVE; NVMeoF (RoCEv2); FC-NVMe; TCAM filters.
- 5th Gen VIC (2022): VIC 15000 Series; 10/25/40/50/100/200 GbE; PCIe Gen4; DDR4; NVGRE/VXLAN/GENEVE; NVMeoF (RoCEv2, NVMe/FC); PTP; QinQ; physical NIC mode; enhanced QoS; latency 1.0 us; SR-IOV.
5th Gen VIC Cards for X-, B-, and C-Series
- Supports 10G/25G/40G/50G/100G/200G; CNA with single-wire management; dynamic FC and Ethernet virtual interfaces; x16 PCIe Gen4.
- NVMeoF: FC-NVMe, RoCEv2. Overlays: NVGRE, VXLAN, GENEVE.
- RSS, NetQueue, VMQ, VMMQ, RSSv2; SR-IOV, SIOV (HW capable), usNIC, DPDK.
- PTPv2, L3 ECN (HW capable), 16K Rx ring size, QinQ tunneling, physical NIC mode.
- Secure boot for VIC 15420, 15422, 15235, 15425.
- VIC 15000 Series support by release: VIC 15231 and VIC 15428 with 4.2(2); VIC 15411 and VIC 15238 with 4.2(3); VIC 15420 and VIC 15422 with 4.3(1); VIC 15235 and VIC 15425 with 4.3(2).
- Models:
  - VIC 15411: 10/40G mLOM (B200-M6)
  - VIC 15231: 2x100G mLOM (X210c-M6/M7), 200G VIC
  - VIC 15420: 4x25G mLOM (X210c-M6/M7)
  - VIC 15422: 4x25G mezz (X210c-M6/M7)
  - VIC 15428: 10/25/50G mLOM (M6/M7 C-Series)
  - VIC 15238: 40/100/200G mLOM (M6/M7 C-Series), 200G VIC
  - VIC 15235: 40/100/200G PCIe (M6/M7 C-Series), 200G VIC
  - VIC 15425: 10/25/50G PCIe (M6/M7 C-Series)
VIC 15000 Series for X-Series and C-Series
- VIC 15231 (UCSX-ML-V5D200G): X210c M6/M7; 100G; 2 ports; mLOM; FI 6400/6536; IFM-25G/IFM-100G; X9508 chassis; supported release 4.2(2)
- VIC 15428 (UCSC-M-V5Q50G): M6/M7 C-Series; 10/25/50G; 4 ports; mLOM; FI 6300/6400/6536; FEX 93180YC-FX3, 2348-UPQ; supported release 4.2(2)
- VIC 15411 (UCSB-ML-V5Q10G): B200-M6; 10/40G; 2 ports; mLOM; FI 6300/6400/6536; IOM 2204/2208/2304/2408; 5108 chassis; supported release 4.2(3)
- VIC 15238 (UCSC-M-V5D200G): M6/M7 C-Series; 40/100/200G; 2 ports; mLOM; FI 6300/6536; supported release 4.2(3)
- VIC 15420 (UCSX-ML-V5Q50G): X210c M6/M7; 25G; 4 ports; mLOM; FI 6400/6536; IFM-25G/IFM-100G; X9508 chassis; supported release 4.3(1)
- VIC 15422 (UCSX-ME-V5Q50G): X210c M6/M7; 25G; 4 ports; mezz; FI 6400/6536; IFM-25G/IFM-100G; X9508 chassis; supported release 4.3(1)
- VIC 15425 (UCSC-P-V5Q50G): M6/M7 C-Series; 10/25/50G; 4 ports; PCIe; FI 6300/6400/6536; FEX 93180YC-FX3, 2348-UPQ; supported release 4.3(2)
- VIC 15235 (UCSC-P-V5D200G): M6/M7 C-Series; 40/100/200G; 2 ports; PCIe; FI 6300/6536; supported release 4.3(2)
Blade Connectivity

X-Series: VIC 15000, IFM-100G, FI-6536 Connectivity
- An X-Series 9508 chassis has 8x 100G-KR or 32x 25G-KR Ethernet connections between an IFM-100G or IFM-25G and the eight X210c compute nodes.
- Two interface types on the IFM:
  - Network Interfaces (NIF): ports on the IFM that connect to the Fabric Interconnect, 25G or 100G depending on the IFM.
  - Host Interfaces (HIF): internal ports on the IFM that connect to the compute nodes; 25G or 100G depending on the IFM and VIC.
- The HIF port speed from an IFM-100G to each X210c can be 1x 100G or 4x 25G depending on the inserted VIC: 100G-KR4 with VIC 15231, or 4x 25G-KR with VIC 15420/15422.
- The HIF port speed from an IFM-25G to each X210c is always 25G for all VICs.
- An X210c compute node can have one or two VICs in the following combinations: mLOM only (15231 or 15420), or mLOM + PCIe mezz (15420 + 15422).
- Depending on the Fabric Interconnect, IFM, and VIC, a vNIC on a VIC adapter sees 50G or 100G total bandwidth with a single-flow maximum of 25G or 100G.
- Topology: UCS X-Series chassis with X9108-IFM-100G; 2x 100G-KR or 8x 25G-KR internal connections to each X210c; 1 to 8 x 100GbE uplinks per IFM to the UCS FI 6536 pair (L1/L2 links between the FIs).

X-Series: VIC 15000, IFM-25G, FI-6536/6400 Series Connectivity
- Topology: UCS X-Series chassis with X9108-IFM-25G; 8x 25G-KR internal connections to each X210c; 1 to 8 x 25GbE uplinks per IFM to a UCS FI 6536 or FI 6400 Series pair (L1/L2 links between the FIs).
VIC 15231
- mLOM VIC for the X210c-M6/M7 with FI 6400/6536.
- Programmable virtual interfaces: vNICs and vHBAs are bound to a logical uplink; vNIC fabric failover gives a primary and secondary path.
- Backplane ports to the IFM: with IFM-25G, P1-P4 at 25G; with IFM-100G, P1-P2 at 100G. Logical uplink 1 is a 2x25G port-channel (IFM-25G) or a 100G-KR4 port (IFM-100G); logical uplink 2 mirrors this toward fabric B.
- Connectivity of 25G or 100G from the VIC depending on the IFM: 1 port of 100G-KR4 is enabled with an IFM-100G, and 2 ports of 25G-KR are enabled with an IFM-25G.
- With IFM-100G the logical uplink is a 100G interface: a vNIC/vHBA bound to the logical uplink is 100G and can do a single flow of 100Gbps.
- With IFM-25G the logical uplink is a 50G port-channel: a vNIC/vHBA bound to the logical uplink is 50G and can do a single flow of 25Gbps.
- vNICs/vHBAs are defined by the LAN/SAN connectivity policy; there are no default vNICs/vHBAs.
VIC 15420 and 15422
- VIC 15420: mLOM VIC for the X210c-M6/M7. VIC 15422: mezz VIC for the X210c-M6/M7.
- The mezz card also provides 2x x16 PCIe Gen4 connectivity to the XFM module for the X440p PCIe node.
- Connectivity of 2x 25G per fabric irrespective of IFM-25G or IFM-100G: 2 ports of 25G-KR enabled per IFM, forming a 50G uplink port-channel across the 2x 25G-KR lanes per IFM (4x 25GBASE-KR unified network fabric per adapter; 50-Gbps bandwidth to fabric A and 50-Gbps to fabric B).
- vNICs and vHBAs bound to the uplink are 50G; a vNIC/vHBA can do a single flow of 25Gbps and an aggregate of 50G.
- vNICs/vHBAs are defined by the LAN/SAN connectivity policy in Intersight; no default vNICs/vHBAs.
- 512 programmable virtual interfaces (Ethernet NICs and Fibre Channel HBAs); mezzanine card form factor; vNIC fabric failover with primary and secondary paths.
Throughput per UCS X210c Compute Node
- A: FI-6536 + X9108-IFM-100G, VIC 15231
  - Throughput per node: 200G (100G per IFM); vNICs needed for max BW: 2; KR connectivity per IFM: 1x 100G-KR
  - Single vNIC throughput: 100G; max single-flow BW per vNIC: 100G; single vHBA throughput: 100G
- B: FI-6536/6400 Series + X9108-IFM-25G, VIC 15231
  - Throughput per node: 100G (50G per IFM); vNICs needed for max BW: 2; KR connectivity per IFM: 2x 25G-KR
  - Single vNIC throughput: 50G (2x25G KR); max single-flow BW per vNIC: 25G; single vHBA throughput: 50G
- C: FI-6536 + X9108-IFM-25G/100G, or FI-6400 Series + X9108-IFM-25G, VIC 15420
  - Throughput per node: 100G (50G per IFM); vNICs needed for max BW: 2; KR connectivity per IFM: 2x 25G-KR
  - Single vNIC throughput: 50G (2x25G KR); max single-flow BW per vNIC: 25G; single vHBA throughput: 50G
- D: FI-6536 + X9108-IFM-25G/100G, or FI-6400 Series + X9108-IFM-25G, VIC 15420 + VIC 15422
  - Throughput per node: 200G (100G per IFM); vNICs needed for max BW: 4; KR connectivity per IFM: 4x 25G-KR
  - Single vNIC throughput: 50G (2x25G KR); max single-flow BW per vNIC: 25G; single vHBA throughput: 50G
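The per-vNIC throughput above is what the host driver reports as link speed. As a quick sanity check, here is a minimal sketch of reading the speed a vNIC presents to a RHEL host with ethtool; the interface name eno5 is a placeholder, and the actual enic interface names on your system will differ.

```bash
# Confirm the interface is a VIC vNIC (expect "driver: enic")
ethtool -i eno5

# Show the speed the vNIC presents to the OS
# (e.g. 50000Mb/s behind a 2x25G-KR logical uplink, 100000Mb/s behind a 100G-KR4 uplink)
ethtool eno5 | grep -i speed
```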
X210c, IFM 9108-100G, and VIC 15231 (combination A)
- Combination A: FI 6536 + IFM 9108-100G + VIC 15231 in the mLOM slot; mezzanine slot empty.
- The VIC 15231 runs one 100GBASE-KR4 port group to IFM 9108-100G side A and one to side B; each IFM has 8x 100Gb NIFs toward the FI.
- The adapter attaches to CPU 1 over x16 PCIe 4.0 (UPI to CPU 2).
- Result: 100G vNICs and 100G vHBAs.
X210c, IFM 9108-25G, and VIC 15231 (combination B)
- Combination B: FI 6536/6400 Series + IFM 9108-25G + VIC 15231 in the mLOM slot; mezzanine slot empty.
- The VIC 15231 runs 2x 25Gb (25G-KR) to port group A on IFM side A and 2x 25Gb to port group B on IFM side B; each IFM has 8x 25Gb NIFs toward the FI.
- The adapter attaches to CPU 1 over x16 PCIe 4.0 (UPI to CPU 2).
- Result: 50G vNICs and 50G vHBAs.
X210c, IFM 9108-25G/100G, and VIC 15420 (combination C)
- Combination C: FI 6536 + IFM 9108-25G/100G + VIC 15420 in the mLOM slot, or FI 6400 Series + IFM 9108-25G + VIC 15420; mezzanine slot empty.
- The VIC 15420 runs 2x 25Gb (25G-KR) to port group A on IFM side A and 2x 25Gb to port group B on IFM side B; each IFM has 8x 25G/100G NIFs toward the FI.
- The adapter attaches to CPU 1 over x16 PCIe 4.0 (UPI to CPU 2).
- Result: 50G vNICs and 50G vHBAs.
X210c, IFM 9108-25G/100G, VIC 15420 and VIC 15422 (combination D)
- Combination D: FI 6536 + IFM 9108-25G/100G, or FI 6400 Series + IFM 9108-25G, with VIC 15420 in the mLOM slot and VIC 15422 in the mezzanine slot, joined by the VIC 15000 bridge.
- Each adapter runs 2x 25Gb (25G-KR) to port group A on IFM side A and 2x 25Gb to port group B on IFM side B; each IFM has 8x 25G/100G NIFs toward the FI.
- The VIC 15420 attaches to CPU 1 and the VIC 15422 to CPU 2, each over x16 PCIe 4.0.
- Result: 50G vNICs and vHBAs on each adapter.
62、rface-types on IOMNetwork Interface(NIFNetwork Interface(NIF),interface on IOM which connect to Fabric-Interconnect(redred-linkslinks).10/25/40Gbps depending on the IOM.Host Interface(HIF)ost Interface(HIF),are backplane ports on IOM that connect to blade-server(blueblue-linkslinks).10G-KR connectio
63、ns per server can mux to 40G-KR4 in IOM-2304 and 2408.40G-KR4 with 15411+PE.Half-width blade-server(B200-M6)can have the following combinations with VIC 15411MLOM(VIC-15411)MLOM(VIC-15411)+port-expander(PE)Depending on the Fabric-Interconnect,IOM,VIC and blade-server,vNIC on a VIC adapter will see 1
64、0G,20G,or 40G total bandwidth and single traffic flow max of 10G,25G,or 40G.UCS 5108 Blade Server Chassis with IOM 24088x10G backplane Ethernet connections to each blade serverBlade 1UCS FI 6454 25GbE links25GbE linksL1/L2 linksBRKCOM-229527 2023 Cisco and/or its affiliates.All rights reserved.Cisco
VIC 15411
- VIC 15411: mLOM VIC for the B200-M6.
- Supported with FI-6536, FI-6400, and FI-6300 Series, and with IOM-2200 Series, IOM-2304, and IOM-2408.
- Connectivity options:
  - VIC 15411: 2 ports of 10G-KR enabled per IOM-2204/2208, IOM-2304, or IOM-2408.
  - VIC 15411 + port expander: 1 port of 40G-KR4 enabled per IOM-2304 or IOM-2408 (40GBASE-KR4 with the port expander; 40G to each IOM; 40-Gbps BW to fabric A and 40-Gbps BW to fabric B).
- The VIC uplink is 20G or 40G depending on the option; vNICs and vHBAs bound to the uplink are 20G or 40G.
- A vNIC/vHBA can do a single flow of 10Gbps, 25Gbps, or 40Gbps depending on the IOM and VIC combination.
- vNICs/vHBAs are defined by Intersight or UCSM policies; no default vNICs/vHBAs.
- 512 programmable virtual interfaces (Ethernet NICs and Fibre Channel HBAs); mezzanine LOM card form factor; vNIC fabric failover with primary and secondary paths.
B200 M6 Supported Combinations (per-server bandwidth)
- FI-6536/6400 Series + IOM-2408: VIC 15411 = 40G; VIC 15411 + PE = 80G
- FI-6536/6300 + IOM-2304: VIC 15411 = 40G; VIC 15411 + PE = 80G
- FI-6300/6400 + IOM-2208: VIC 15411 = 40G; VIC 15411 + PE = N/A
- FI-6300/6400 + IOM-2204: VIC 15411 = 20G; VIC 15411 + PE = N/A
- * Recommended combination of FI/IOM/VIC.
- No support for FI 6248 or 6296 with the B200-M6.
- VIC 15411 support in IMM and UCSM is available from the 4.2(3) release.
- FI-6536 is supported in IMM and UCSM from the 4.2(3) release.
72、 2CPU 2CPU 1CPU 12x10Gb10G KR10G KR10G KR10G KR2x10GbUPIPort Port Group AGroup APort Port Group BGroup B4x40 Gb4x40 Gbx16 PCIe4.020G vNIC20G vNIC&vHBAvHBABRKCOM-229530 2023 Cisco and/or its affiliates.All rights reserved.Cisco Public#CiscoLiveB200M6,IOM 2304,&VIC 15411 with Port ExpanderIOM IOM 2304
73、2304Side BSide BPort ExpanderPort Expander15411 Adapter15411 AdapterB200M5B200M5Mezzanine SlotMezzanine SlotmLOM SlotmLOM SlotIOM IOM 23042304Side ASide ACPU 2CPU 2CPU 1CPU 110G KR10G KR10G KR10G KR10G KR10G KR10G KR10G KRUPI10G10G10G10G 4x40 Gb4x40 Gb40GBASE-KR440GBASE-KR4Port Port Group BGroup BPo
74、rt Port Group AGroup Ax16 PCIe4.0 x16 PCIe4.040G vNIC40G vNIC&vHBAvHBABRKCOM-229531 2023 Cisco and/or its affiliates.All rights reserved.Cisco Public#CiscoLiveB200M6,IOM 2408,&VIC 15411IOM IOM 24082408Side BSide BEmptyEmptyVIC 15411 AdapterVIC 15411 AdapterB200M5B200M5Mezzanine SlotMezzanine SlotmLO
75、M SlotmLOM SlotIOM IOM 24082408Side ASide ACPU 2CPU 2CPU 1CPU 12x10Gb10G KR10G KR10G KR10G KR2x10GbUPI8x25Gb8x25GbPort Port Group AGroup APort Port Group BGroup Bx16 PCIe4.0 x16 PCIe4.020G vNIC20G vNIC&vHBAvHBABRKCOM-229532 2023 Cisco and/or its affiliates.All rights reserved.Cisco Public#CiscoLiveB
76、200M6,IOM 2408,&VIC 15411 with Port Expander(PE)IOM IOM 24082408Side BSide BPort ExpanderPort Expander15411 Adapter15411 AdapterB200M6B200M6Mezzanine SlotMezzanine SlotmLOM SlotmLOM SlotIOM IOM 24082408Side ASide ACPU 2CPU 2CPU 1CPU 110G KR10G KR10G KR10G KR10G KR10G KR10G KR10G KRUPI10G10G10G10G 8x
77、25Gb8x25Gb40GBASE-KR440GBASE-KR4Port Port Group BGroup BPort Port Group AGroup Ax16 PCIe4.0 x16 PCIe4.040G vNIC40G vNIC&vHBAvHBABRKCOM-229533Packet flow 2023 Cisco and/or its affiliates.All rights reserved.Cisco Public#CiscoLivePhysical CableVirtual Cable(VN-Tag)Abstracting the Logical ArchitectureD
Packet Flow

Abstracting the Logical Architecture: Physical Cable vs. Virtual Cable (VN-Tag)
- Dynamic, rapid provisioning; state abstraction; location independence (blade or rack).
- Physical view: vNIC1 and vHBA1 on the logical adapter in a B200 or X210c blade reach IOM/IFM A over backplane KR lanes, and the IOM/IFM uplinks (10/25/40/100G) land on Fabric Interconnect port Eth 1/1.
- Logical view: the service profile (server) presents vNIC1 and vHBA1 as if virtually cabled to vEth1 and vFC1 on the Fabric Interconnect.

Cisco UCS: Infrastructure Virtualization
- Individual Ethernet traffic (management, vMotion, data) and individual storage traffic (iSCSI, NFS, FC, NVMeoF) are carried over DCB Ethernet.
- Cable virtualization (VN-Tag); switchport virtualization (vEth1/vFC1, vEth2/vFC2 on the Fabric Interconnect); adapter virtualization (NIV) on the adapter's PCIe interfaces (Eth 1/1, Eth 1/2).
- Server abstraction through the service profile: number of adapters, identity (MAC/WWN), firmware, and settings for the CPU/memory/I/O of a blade or rack server.
5th Gen Fabric FC/IP Packet Flow with X-Series
- Topology: X210c M6/M7 with VIC 15231 (x16 PCIe Gen4); 100G HIF to each IFM-100G; NIF 1-8 x 100G per IFM to the FI-6536 pair; FC/FCoE/Ethernet uplinks (100G Ethernet, 32G FC) to the LAN and SAN. Other VIC option: 4x 25G with VIC 15420 + 15422.
1. Ethernet or FCoE frames from the host are appended with a unique VN-Tag, specific to each vNIC/vHBA, by the VIC. The source VIF is inserted for traffic from the host/VIC to the FI and identifies each vNIC/vHBA at the FI/IOM; the destination VIF is inserted for traffic from the FI to the host/VIC to uniquely identify each vNIC/vHBA. VN-Tag fields: EtherType, d, p, destination VIF, L, R, ver, source VIF.
2. Port-channel traffic distribution: VN-tagged IP packets from a vNIC are hashed across the IOM-VIC and FI-IOM port-channels based on a 7-tuple (MAC, VLAN, IP, UDP/TCP). VN-tagged FCoE frames from a vHBA are round-robin distributed based on S_ID, D_ID, and OX_ID, ensuring each I/O follows the same physical path.
3. East-west traffic between servers is locally switched on the Fabric Interconnect.
4. FC/FCoE/IP packets are forwarded to the SAN or LAN by the Fabric Interconnects.
5th Gen Fabric FC/IP Packet Flow with B200
- Topology: B200 M6 with VIC 15411 (x16 PCIe Gen4) and optional port expander; HIF of 2x 10G or 1x 40G to each IOM 2408; NIF 1-8 x 25G per IOM to the FI-6536 pair; FC/FCoE/Ethernet uplinks (100G Ethernet, 32G FC) to the LAN and SAN. Bandwidth per fabric: 2x 10G with VIC 15411, 40G with VIC 15411 + PE.
1. Ethernet or FCoE frames from the host are appended with a unique VN-Tag, specific to each vNIC/vHBA, by the VIC. The source VIF identifies each vNIC/vHBA toward the FI/IOM; the destination VIF identifies each vNIC/vHBA for traffic from the FI to the host/VIC.
2. Port-channel traffic distribution: VN-tagged IP packets from a vNIC are hashed across the IOM-VIC and FI-IOM port-channels based on a 7-tuple (MAC, VLAN, IP, UDP/TCP); VN-tagged FCoE frames from a vHBA are round-robin distributed based on S_ID, D_ID, and OX_ID so that each I/O follows the same physical path.
3. East-west traffic between servers is locally switched on the Fabric Interconnect.
4. FC/FCoE/IP packets are forwarded to the SAN or LAN by the Fabric Interconnects.

Rack Server Connectivity
VIC 15428, 15425 (FI Managed)
- VIC 15428: mLOM VIC for M6/M7 C-Series with FI 6300/6400/6536. VIC 15425: PCIe VIC for M6/M7 C-Series with FI 6300/6400/6536.
- Four physical ports (P1-P4) which can run at 10G/25G; port speed is determined by the inserted transceiver (SFP+, SFP28, or SFP56).
- Physical ports P1 and P2 are statically HW port-channeled as logical uplink 1, and P3 and P4 are bundled as logical uplink 2. Programmable virtual interfaces (vNICs, vHBAs) are bound to a logical uplink; vNIC fabric failover gives a primary and secondary path.
- Connectivity to Fabric Interconnects: one or two links to each FI are supported, and the links connected to each FI form a port-channel; the VIC HW port-channel cannot be disabled when FI managed.
- vNIC speed is determined by the transceiver type and the number of active links: vNIC/vHBA speeds of 10G, 20G, 25G, or 50G per fabric.
- No FEC or auto-negotiation configuration is required; link settings are auto-determined.
- vNICs/vHBAs are defined by the LAN/SAN connectivity policy; no default vNICs/vHBAs.
VIC 15428, 15425 Connectivity to FI 6536
- Connectivity with Fabric Interconnects supports only switch-independent teaming modes across Linux, ESXi, and Windows.
- A. 4x 25G breakout on the FI: VIC ports 1 & 2 to FI-A and ports 3 & 4 to FI-B - supported.
- B. VIC port 1 to FI-A and port 3 or 4 to FI-B - supported.
- C. VIC port 2 to FI-A and port 4 to FI-B - supported (in general, ports 1 or 2 to FI-A and ports 3 or 4 to FI-B are supported).
- D. Not supported: VIC port 1 to FI-A and port 2 to FI-B, or VIC port 3 to FI-A and port 4 to FI-B.
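Since FI-attached vNICs only support switch-independent teaming, host bonding should avoid LACP/IP-hash modes. Below is a minimal sketch of a switch-independent active-backup bond on a Linux host using iproute2; the interface names eno5/eno6 and the address are placeholders, and NetworkManager or the OS vendor's tooling is the usual way to make this persistent.

```bash
# Switch-independent active-backup bond: no LACP and no switch-side configuration needed
ip link add bond0 type bond mode active-backup miimon 100
ip link set eno5 down && ip link set eno5 master bond0   # vNIC on fabric A
ip link set eno6 down && ip link set eno6 master bond0   # vNIC on fabric B
ip link set bond0 up
ip addr add 192.0.2.10/24 dev bond0                      # example address
```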
VIC 15428, 15425 (Standalone)
- Four ports which can run at 10G/25G/50G; speed is determined by the inserted transceivers (SFP+, SFP28, or SFP56).
- By default, ports (P1, P2) and (P3, P4) are each in a HW port-channel, bundled as logical uplink 1 and logical uplink 2, with 2x vNICs and 2x vHBAs by default.
- If the HW port-channel is disabled from CIMC, four uplinks are enabled, one per physical port, with 4x vNICs and 4x vHBAs by default.
- Auto-negotiation mode (enabled through the admin link-training config): enabled by default for 50G copper transceivers, disabled by default for 25G copper transceivers. Auto-negotiation can be enabled/disabled using the CIMC CLI or WebUI in standalone mode.
- Admin link-training configuration per port: Auto, On, Off (Auto means the VIC firmware decides the correct mode). Auto-FEC is supported with 25G-CUx cables when link training is on. Link training ensures greater link reliability.
- FEC (Forward Error Correction) configuration per port: for 25G - fec-off, cl74, cl91-cons16, cl91, cl108; for 50G - cl91; the FEC configuration is ignored at 10G.
- Physical NIC mode is supported to disable priority tagging.

Debugging Connectivity Issues Due to FEC at 25G
1. Match FEC on the switch and the VIC. Some older N9K switches default to CL74 and defaults differ across switches; FEC on the VIC at 25G is CL91 by default. Auto-FEC is only for copper cables; Auto-FEC is disabled in the VIC. (Example: how to set CL91 on an N9K; see the sketch below.)
2. Check the FEC configuration in CIMC.
3. Cables/transceivers also have minimum FEC requirements; for example, SFP-25G-SR-S/CSR-S/LR-S have a minimum FEC of CL91. FEC should match on both ends of the link.
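A minimal sketch of aligning FEC on the switch side, assuming a Nexus 9000 ToR running NX-OS. The exact FEC keywords vary by N9K platform and release (some accept cl74/cl91-style keywords, others rs-fec/fc-fec), so verify the available options with `fec ?` on your switch; CL91 corresponds to RS-FEC.

```
! NX-OS (Nexus 9000): set RS-FEC (CL91) on the 25G port facing the VIC
configure terminal
 interface Ethernet1/10
  fec rs-fec          ! on some platforms/releases the keyword is cl91
end

! Verify operational FEC and link state
show interface Ethernet1/10
```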
VIC 15428, 15425 Connectivity to a Nexus Switch in Standalone Mode
- These connectivity options are applicable at 10G/25G.
- A. VIC ports 1 & 2 to SW1 and ports 3 & 4 to SW2, with the default VIC port-channel enabled - supported. Requires port-channel configuration on the switch with switch-dependent bonding; cannot support MCT/vPC at the ToR switch or OS IP-hash load balancing; MAC-hash or port-ID load balancing in the OS should be used to avoid MAC moves on the ToR.
- B. VIC port 1 or 2 to SW1 and port 3 or 4 to SW2, with the VIC port-channel enabled and one link used in each of the (1,2) and (3,4) port pairs - supported. Supports switch-dependent and switch-independent OS teaming/bonding, MCT/vPC at the ToR switch, and all OS teaming load-balancing options.
- C. VIC ports 1, 2, 3, 4 split across SW1 and SW2 with the VIC port-channel disabled - supported. Supports MCT/vPC at the ToR switch and all OS teaming/load-balancing options.
- D. VIC ports (1,2) to SW1 and (3,4) to SW2 with the VIC port-channel enabled and a single port-channel (MCT/vPC) spanning both switches - not supported.
VIC 15238, 15235
- VIC 15238: mLOM VIC for M6 C-Series with FI 6300/6400/6536. VIC 15235: PCIe VIC for M6 C-Series with FI 6300/6400/6536.
- Two physical ports (P1, P2) which can run at 40G/100G/200G; port speed is determined by the inserted QSFP transceiver or cable.
- Connectivity to Fabric Interconnects or an N9K switch.
- vNIC speed is determined by the transceiver type: vNIC/vHBA speed of 40G or 100G per fabric in UCSM/IMM mode, and 40G, 100G, or 200G per fabric in standalone mode.
- No FEC or auto-negotiation configuration is required in UCSM/IMM mode; link settings are auto-determined. FEC configuration for 100G in standalone mode: cl91, cl108.
- In UCSM/IMM mode, vNICs/vHBAs are defined by the LAN/SAN connectivity policy and there are no default vNICs/vHBAs; standalone mode defaults to 2x vNICs and 2x vHBAs.
- Programmable virtual interfaces: vNICs and vHBAs are bound to a physical port, and the vNIC/vHBA speed equals the speed of that port; vNIC fabric failover with primary and secondary paths.
- 512 programmable virtual interfaces (Ethernet NICs and Fibre Channel HBAs); half-height PCIe form factor; 40/100/200-Gbps bandwidth to fabric A and to fabric B.
VIC 15238, 15235 Connectivity to Fabric Interconnects or a Nexus Switch
- UCSM/IMM managed (FI-A/FI-B 6300/6536 at 40/100G): VIC port 1 to FI-A and port 2 to FI-B - supported. Connectivity to Fabric Interconnects supports only switch-independent teaming; MCT/vPC at the FI and OS IP-hash style load balancing are not supported, so MAC-hash or port-ID load balancing in the OS should be used to avoid MAC moves on the FI. There is no need to configure FEC.
- IMC or standalone mode (Nexus switches at 40/100G): VIC port 1 to switch 1 and port 2 to switch 2 - supported. Supports switch-dependent and switch-independent OS teaming/bonding, MCT/vPC at the ToR switch, and all OS teaming load-balancing options. The default FEC of CL91 on the VIC works for all cable types, so ensure the switch end is also CL91.
VIC Features

Receive Side Scaling (RSS)
- RSS is a VIC HW feature supported for ESXi, Linux, and Windows.
- RSS provides better server CPU utilization and higher throughput and handles bursty traffic, by distributing Rx traffic from a vNIC across multiple Rx queues/CPU cores based on L2/L3/L4 packet header fields.
- The VIC 15000 series supports a 16K Tx and Rx ring size; previous generations supported up to 4K.
- Adapter policy for performance with RSS:
  Parameter              | ESXi     | Linux                   | Windows
  Tx queues              | 1        | 1                       | 1
  Tx ring size           | 4K/16K   | 4K/16K                  | 4K/16K
  Rx queues              | 8        | 8                       | 8
  Rx ring size           | 4K/16K   | 4K/16K                  | 4K/16K
  CQ                     | 9        | 9                       | 9
  Interrupts             | 11       | 11 or 10                | 512
  Interrupt calculation  | "CQ+2"   | "CQ+2" or "Rx-queue+2"  | 512 or "2x CPU cores + 4"
  RSS                    | Enabled  | Enabled                 | Enabled
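A minimal sketch of confirming from a RHEL host that the adapter-policy values above actually took effect; the interface name eno5 is a placeholder.

```bash
# Queue counts pushed by the adapter policy (expect 1 Tx and 8 Rx queues with the RSS policy above)
ethtool -l eno5

# Tx/Rx ring sizes (expect 4K or 16K on a VIC 15000)
ethtool -g eno5

# RSS hash indirection table and hash key spreading flows across the Rx queues
ethtool -x eno5
```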
vNIC Queue-Drop Debugging
1. Check the vNIC statistics from the VIC.
2. Check the queue statistics from the Linux host.
3. Action to take if rx_no_buf increments: (a) increase the number of Rx queues, (b) increase the Rx queue ring size.
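A sketch of the host-side check, assuming the RHEL enic driver exposes the drop counter through ethtool statistics (the exact counter name can differ slightly by driver release, e.g. rx_no_buf vs rx_no_bufs). The fix itself, adding Rx queues or enlarging the Rx ring, is applied through the vNIC adapter policy rather than with ethtool.

```bash
# Watch for receive-buffer exhaustion on the vNIC; increments mean the ring is too small
# or too few Rx queues are configured for the offered load
ethtool -S eno5 | grep -i rx_no_buf

# Current ring size and queue count, to compare against the adapter policy
ethtool -g eno5
ethtool -l eno5
```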
NetQueue
- NetQueue is an integrated hardware and software solution from Cisco and VMware; it is not a hypervisor-bypass technology.
- NetQueue achieves higher throughput and performance by giving each VM a dedicated Tx/Rx queue; the L2 sorting (VLAN/MAC classifier) is done by the VIC hardware.
- NetQueue dedicates a Tx/Rx queue per VM, while VIC RSS shares multiple Rx queues across multiple VMs.
- NetQueue on the vNIC is enabled through the VMQ connection policy. The interrupt count for NetQueue is calculated as "2 x VMQ + 2" (for example, a policy with 8 VMQs needs 2x8 + 2 = 18 interrupts).
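A sketch of checking NetQueue from the ESXi side once the VMQ connection policy is applied. The VMkernel option name is an assumption here, so confirm it against your ESXi release before relying on it.

```bash
# ESXi shell: confirm NetQueue is enabled in the VMkernel (assumed setting name)
esxcli system settings kernel list -o netNetqueueEnabled

# Driver/firmware details of the VIC uplink as seen by ESXi (nenic driver)
esxcli network nic get -n vmnic0
```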
VMMQ
- Virtual Machine Multi-Queue allows allocating multiple Rx queues per vPort in a Windows Hyper-V host, providing higher throughput and distributing the traffic load across multiple CPU cores.
- Supported in IMM, UCSM, and CIMC.
- VMMQ is recommended over VMQ or RSS for Windows Hyper-V; both VMQ and RSS are also supported by the VIC 15000 for Windows.
- Use the default adapter policy values in Intersight of "Win-HPN" and "MQ" to enable VMMQ; that policy definition is good for 64 vPorts. A sample IMM configuration is shown on the slide.
RSSv2
- RSSv2 is an enhancement to RSS that reduces the latency of updating the RSS indirection tables; it can dynamically spread receive queues over multiple processors much more responsively than RSSv1.
- RSSv2 is a Windows-only driver feature, available from the 4.3(2) release.
- Available for bare-metal Windows Server with RSS and for Hyper-V hosts with VMMQ (aka Dynamic VMMQ).
- Supported only with the VIC 15000 series.
- No user configuration is needed; just update to the latest 4.3(2) release and drivers.
- Example (bare metal, but equally applicable to a VMMQ vPort): Windows/Hyper-V dynamically replaces CPU1 with CPU12 in the indirection table when CPU1 becomes overloaded.
131、ns15000 series VIC will support SR-IOV with 4.3(2)releaseSupported in UCSM and Cisco IMCFirst support for ESXi and the Linux guest-os will require latest async driverLinux KVM&Hyper-V support for SR-IOV will be post 4.3(2)Intersight support will be post 4.3(2)UCSM configuration requires the followin
132、gSR-IOV connectivity policy under LAN tabSRIOV-HPN adapter-policy under vNIC 2023 Cisco and/or its affiliates.All rights reserved.Cisco Public#CiscoLiveSR-IOV UCSM ConfigurationBRKCOM-22955421 2023 Cisco and/or its affiliates.All rights reserved.Cisco PublicVxLAN and GENEVE OffloadVIC provides state
133、less offload for VxLAN and GENEVERSS on inner packet TSO for IPv4/v6 packetsTX/RX checksum offload for IPv4/v6 inner/outer packetEnables better throughput,lowers CPU overhead for VxLAN/Geneve encap and spread packet processing across CPU coresVxLAN offload supports RSS:used with ESXi&Hyper-VSupports
134、 GENEVE offload with two Vmware NSX-T modesStandard Mode Standard Mode supports RSSEnhanced Datapath Mode Enhanced Datapath Mode require separate nenic-ensdriver&only 1-Tx,1-Rx queue supported.In future can support NetQueue.Geneve FrameOuter FrameInner FrameOuter MACOuter IP HeaderGeneve HeaderInner
135、 MACInner IP/ProtocolInner PayloadFCSOuter UDPBRKCOM-229555UCS VIC QinQ Tunneling Feature,L2 segmentationvSwitch/VDSVLAN11VLAN1000VM1VM10vSwitch/VDSServer NVLAN11VLAN2000VM11VM20.UCS ServersUCS ServersFI VLANs:Benefits1.Simpler policy driven L2 Multi-tenancy2.Tunneling without dependency on Hypervis
UCS VIC QinQ Tunneling Feature (L2 Segmentation)
- Benefits: (1) simpler, policy-driven L2 multi-tenancy; (2) tunneling without any dependency on the hypervisor; (3) cost and hardware performance benefits compared to hypervisor overlays; (4) scales VLANs to N*N within the UCS fabric.
- Example: Server 1 and Server 2 each run a vSwitch/VDS with VMs on VLANs 11-2000. vNIC0 is configured with QinQ tag 5 (VLAN 5 QinQ tunnel) and vNIC1 with QinQ tag 6 (VLAN 6 QinQ tunnel). Without QinQ the FI must carry VLANs 11-2000; with QinQ it only carries VLANs 5 and 6.
- Packet walk: (1) a VM sends a frame on VLAN 11 (EtherType 0x8100, sMAC A, dMAC B); (2) the VIC pushes the outer VLAN 5 tag (EtherType 0x8100) plus the VN-Tag toward the fabric; (3) the egress VIC pops the outer tag and delivers the original VLAN 11 frame to the destination server.

VIC Q-in-Q Tunneling Support and Configuration
- VIC QinQ tunneling supports the VIC 1400/15000 series with UCS FI 6400 series and FI 6536, in UCSM and Cisco IMC from the 4.3(2) release; IMM support will come after 4.3(2).
- Configuration required in UCSM: enable QinQ globally under the FI domain; enable QinQ under the VLAN in a vNIC; select Native under a VLAN in the vNIC for untagged traffic.
- For standalone fabrics, ensure the upstream ToR switch will carry double-tagged 802.1Q frames (a software comparison sketch follows).
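For comparison with what the VIC does in hardware, here is a minimal sketch of the equivalent software QinQ stack on a plain Linux NIC, stacking an outer 0x8100 tag (VLAN 5) over the inner workload VLAN 11; the interface names are placeholders. With VIC QinQ the outer tag is pushed and popped by the adapter instead, so the host only ever sees the inner VLAN.

```bash
# Outer tag: VLAN 5 with EtherType 0x8100 (matching the VIC QinQ packet walk)
ip link add link eno5 name eno5.5 type vlan proto 802.1Q id 5
# Inner tag: workload VLAN 11 carried inside the VLAN-5 tunnel
ip link add link eno5.5 name eno5.5.11 type vlan proto 802.1Q id 11
ip link set eno5.5 up && ip link set eno5.5.11 up
```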
VIC Q-in-Q Tunneling UCSM Configuration
- The slide shows the two UCSM steps: the global QinQ setting on the FI domain and the QinQ/Native settings on the vNIC VLAN.

VIC QinQ Packet Walk Example with ACI (L3)
- Setup: Server 1 and Server 2 run virtual switches that are not managed by APIC. Server 1 hosts VM1 (VLAN 11, 10.0.1.1/24, MAC A) and VM2 (VLAN 12, 10.0.2.1/24); Server 2 hosts VM3 (VLAN 11, 10.0.1.2/24) and VM4 (VLAN 12, 10.0.2.2/24, MAC D). vNIC0 and vNIC1 on each server use QinQ tag 5 on the UCS VIC.
- ACI configuration: EPG1 uses VLANs 5 and 11, EPG2 uses VLANs 5 and 12; ACI BD subnets are EPG1/BD1 10.0.1.254/24 and EPG2/BD2 10.0.2.254/24 with gateway MAC G; additional endpoints 10.0.1.10/24 on VLAN 101 and 10.0.2.10/24 on VLAN 102.
- Packet walk: (1) VM1 sends a routed frame on VLAN 11 (0x8100, sMAC A, dMAC G); (2) the VIC adds the outer VLAN 5 tag toward the fabric (inner VLAN 11, sMAC A, dMAC G); (3) ACI routes between bridge domains and returns the frame on VLAN 12 inside the VLAN 5 tunnel (sMAC G, dMAC D); (4) the destination VIC pops the outer tag and delivers the VLAN 12 frame (sMAC G, dMAC D) to VM4.
Physical NIC Mode (Standalone Server)
- Supported on the VIC 1400/15000 series with standalone rack servers; this feature is only available with the VIC in a standalone server.
- Physical NIC mode disables the default priority tagging on VIC vNICs (by default the VIC sends untagged host traffic as VLAN-0/802.1p priority-tagged frames). This allows interoperability with switches that don't support priority tagging.
- Only the default vNICs are supported and no additional vNICs can be created: 2 vNICs for a dual-port mLOM or PCIe rack VIC, 4 vNICs for a quad-port mLOM or PCIe rack VIC.
- Disable FIP and LLDP on the VIC to enable physical NIC mode.
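A sketch of checking whether frames from a vNIC arrive priority-tagged (VLAN 0) at the peer, which is the symptom physical NIC mode removes; run tcpdump on the receiving Linux box, with the interface name as a placeholder.

```bash
# Show link-level headers; priority-tagged frames appear with "vlan 0" in the output
tcpdump -c 10 -e -nn -i eno5 vlan 0
```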
NVMeoF (FC-NVMe / NVMe-RoCEv2 / NVMe-TCP) and RoCE Support
- FC-NVMe, NVMe over RoCEv2, and SMB Direct (RoCEv1/v2) offloads are supported on the VIC for enhanced performance.
- The VIC supports offload of stateless TCP functions for enhanced traffic performance with NVMe-TCP, but does not support full NVMe-TCP offload.
- The VIC 15000 series supports higher traffic throughput with improved RoCEv2 offload support and provides greater overall traffic performance.
- Support matrix (VIC 15000 with FI 6300/6400/6536):
  - FC-NVMe with RHEL, SLES: supported; storage arrays: NetApp, Pure, EMC, IBM
  - FC-NVMe with ESXi: supported; storage arrays: NetApp, Pure, EMC, IBM
  - NVMe over RoCEv2 with RHEL: supported; storage array: Pure
  - NVMe over RoCEv2 with ESXi: supported; storage array: Pure
  - SMB Direct with RoCEv2: supported
  - NVMe-TCP with RHEL: supported; storage arrays: NetApp, Pure
  - NVMe-TCP with ESXi: supported; storage arrays: NetApp, Pure, EMC
PTP Support
- Supported on all VIC 15000 series cards with 100 ns precision.
- Supports PTPv1 and PTPv2, and both multicast and unicast PTP.
- Supported with RHEL. Only one vNIC per VIC card should be enabled with PTP; check with "ethtool -T" (see the sketch below).
- Supported in IMM, CIMC, and UCSM.
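The "ethtool -T" check referenced above, sketched for a RHEL host with a placeholder interface name; hardware timestamping capabilities and a PTP hardware clock index should show up here before a PTP daemon such as ptp4l/phc2sys is pointed at the vNIC.

```bash
# Time-stamping capabilities of the PTP-enabled vNIC
# (expect hardware-transmit/hardware-receive capabilities and a PTP hardware clock index)
ethtool -T eno5
```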
5th Gen VIC Performance

100G Ethernet Performance Test Setup
- FI 6536 with multiple X9508 chassis; X9108 IFM-100G; VIC 15231 adapter on every X210c; 8x 100G per IFM to each fabric.
- Latest VIC firmware and driver; RHEL 8.4.
- Adapter policy: Tx=1, Rx=8, CQ=9, Interrupts=10, RSS enabled; Tx/Rx ring size of 4K or 16K.
- BIOS policy: default. OS tuning: default, with the CPU frequency governor set to performance.
End-to-End 100G: VIC 15231 Performance
- 100G vNIC/vHBA in an X210c with VIC 15231, as seen on the RHEL and ESXi host: each host sees 100G vNICs and 100G vHBAs on both fabric A and fabric B.
- Single flow, 100G: iPerf from one blade's vNIC1 (client) to another blade's vNIC1 (server) on fabric A achieves 100Gbps.
- Bidirectional flow, 100G: iPerf between chassis 1 blade 2 vNIC1 and chassis 2 blade 1 vNIC1 achieves 100Gbps in each direction.
- Dual-fabric test: iPerf on fabric A (blade 3 vNIC1 client to blade 2 vNIC1 server) and on fabric B (blade 3 vNIC2 client to blade 2 vNIC2 server) achieves 100Gbps on fabric A plus 100Gbps on fabric B simultaneously.
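The deck reports iPerf numbers; below is a sketch of the kind of commands that produce them, assuming iperf3 and placeholder addresses. A single stream exercises the single-flow ceiling (100G on a VIC 15231 behind IFM-100G, 25G behind a 25G-KR uplink); parallel streams show the aggregate.

```bash
# Server side (one blade)
iperf3 -s

# Client side: single TCP flow (tests the single-flow ceiling)
iperf3 -c 192.0.2.11 -t 30

# Client side: several parallel streams to fill the vNIC / both-fabric aggregate
iperf3 -c 192.0.2.11 -t 30 -P 8
```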
Fibre Channel Performance Test Setup
- FI 6536 (FC end-host mode) with multiple X9508 chassis, connected to an MDS switch at 32G; X9108 IFM-100G; VIC 15231 adapter on every X210c.
- RHEL 8.4; vHBA policy: FC/FC-NVMe default; BIOS policy: default; OS tuning: default.
- Storage array with 32G adapters behind the MDS switch (4x 32G FC per fabric).

End-to-End 32G: VIC 15231 Performance
- 32G FC connectivity to a storage target, 100% read: chassis blade 2 vHBA1 to storage HBA1, and chassis blade 2 vHBA1 to storage HBA5.
- FIO results per X210c and the storage array counters show 32G of FC on fabric A and 32G of FC on fabric B, for a total of 64G across fabrics A and B.

100G FC per Fabric: VIC 15231 Performance
- 100G vHBA performance in an X210c: chassis blade 2 against a 4x 32G FC storage target, 100% read.
- FIO results per X210c show 92G of FC throughput (on the 100G vHBA) per fabric.
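A sketch of a 100% sequential-read FIO job similar to the ones behind the FC numbers, assuming a multipathed LUN at a placeholder device path; block size, queue depth, and job count should be tuned to your array.

```bash
# 100% read, large blocks, deep queue: approximates the slide's FC throughput test
fio --name=fc-read --filename=/dev/dm-0 --rw=read --bs=512k \
    --ioengine=libaio --direct=1 --iodepth=32 --numjobs=8 \
    --time_based --runtime=60 --group_reporting
```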
NFS 100G Test Setup
- FI 6536 with multiple X9508 chassis; X9108 IFM-100G; VIC 15231 adapter on every X210c; RHEL 8.4.
- Adapter policy: 1 Tx, 8 Rx, 9 CQ, 10 interrupts; Tx/Rx ring size of 4K or 16K; BIOS policy: default; OS tuning: default.
- Storage array with 100G adapters.
- 100G NFS connectivity to a storage filer: chassis blade 2 vNIC1 to the filer.

4 x 25Gb Breakout (Server Ports)
- UCS 6536 FI server ports connect to a UCS chassis or rack server through a Cisco patch panel: MPO-MPO MMF trunk cable between the FI breakout and the panel, duplex LC-LC MMF cables between the panel and the servers.
- Each breakout module supports up to three 12F MPO to 4x duplex LC breakouts.
100G to 4x25Gb, 40G to 4x10Gb, or 400G to 4x100Gb Ethernet Breakout (Ethernet Uplink Ports)
- Breakout module: rear 12F MPO to 4x duplex LC.
- High-speed side: FI-6536 or Nexus 9K at 40G/100G/400G with QSFP-40G-SR4-S, QSFP-100G-SL4/SR4-S, or QDD-400G-DR4-S, connected to the patch panel with an MPO-MPO SMF/MMF trunk cable.
- Low-speed side: FI-6536 or Nexus 9K at 10G/25G/100G with SFP-10G-SR, SFP-25G-SR-S, SFP-25G-SL, or QSFP-100G-DR (1x 10/25/100G per breakout leg), connected with duplex LC-LC SMF/MMF cables.
- Each breakout module supports up to three 12F MPO to 4x duplex LC breakouts.
128G to 4x 8/16/32Gb FC Breakout (FC Uplink/Storage Ports)
- UCS 6536 FI ports 33-36 with the 128G FC QSFP (Cisco PID DS-SFP-4X32G-SW, 4x 32G) connect through the Cisco patch panel (MPO-MPO MMF trunk cable; breakout module rear 12F MPO to 4x duplex LC) to a SAN switch or FC storage array.
- SAN/storage side: 1x 8/16/32G per leg with DS-SFP-FC8G-SW, DS-SFP-FC16G-SW, or DS-SFP-FC32G-SW, over duplex LC-LC MMF cables.
- Each breakout module supports up to three 12F MPO to 4x duplex LC breakouts.

Patch Panel PIDs (MMF)
- Chassis options: 1RU (18 MPO x 72 duplex LC), 2RU (36 MPO x 144 duplex LC), 3RU (54 MPO x 216 duplex LC).
- Cabling: MPO12-MPO12 trunk cable and MPO-LC breakout cable on the MPO side; duplex LC-LC patch cords on the LC side.
Patch Panel PIDs (SMF)
- Ref 1 (patch panels and cassettes):
  - PP-72X100G-SMF: patch panel, 1RU, 18 MPO12 to 72 duplex LC, SMF, complete assembly
  - PP-144X100G-SMF: patch panel, 2RU, 36 MPO12 to 144 duplex LC, SMF, complete assembly
  - PP-216X100G-SMF: patch panel, 3RU, 54 MPO12 to 216 duplex LC, SMF, complete assembly
  - PP-1RU-CHAS / PP-2RU-CHAS / PP-3RU-CHAS: patch panel empty chassis, 1RU / 2RU / 3RU
  - PP-CAS-L-12LC-SMF: patch panel left cassette, 3 MPO12 to 12 duplex LC, SMF
  - PP-CAS-R-12LC-SMF: patch panel right cassette, 3 MPO12 to 12 duplex LC, SMF
- Ref 2, for 4xN connections (MPO12-MPO12 trunk cable, Type B, SMF): CB-M12-M12-SMF1M, -SMF2M, -SMF3M, -SMF5M, -SMF7M, -SMF10M, -SMF15M, -SMF20M, -SMF25M, -SMF30M (1/2/3/5/7/10/15/20/25/30 m)
- Ref 2, for 2x100 connections (MPO12 to 4x duplex CS breakout cable, SMF): CB-M12-4CS-SMF5M (5 m), CB-M12-4CS-SMF7M (7 m)
- Ref 3 (duplex LC-LC patch cord, SMF): CB-LC-LC-SMF1M, -SMF2M, -SMF3M, -SMF5M, -SMF7M, -SMF10M, -SMF15M, -SMF20M, -SMF25M, -SMF30M (1/2/3/5/7/10/15/20/25/30 m)
- Notes:
  1. The patch panel can be purchased as a complete assembly (Ref 1), which includes the chassis and all cassettes, or as separate components. Patch cables are not included in panel configurations.
  2. The MPO12-MPO12 cable connects the patch panel to: QDD-400G-DR4-S, QDD-4X100G-FR-S, QDD-4X100G-LR-S.
  3. The MPO12-4x CS cable connects the patch panel to: QDD-2X100-LR4-S, QDD-2X100-CWDM4-S.