April 2025 | inform.tmforum.org

TREND REPORT: AI & Data Innovation
Monetizing the AI connectivity opportunity

Author: Ed Finegold, Contributing Analyst
Editor: Dawn Bushaus, Contributing Editor

Contents
- The big picture
- Section 1: Making the business case for AI workload connectivity
- Section 2: How do AI workloads differ and what do they need from telco networks?
- Section 3: How to provide connectivity for AI workloads
- Section 4: How to monetize AI connectivity
- Section 5: Make it happen: strategies for monetizing AI connectivity
- Additional resources

We hope you enjoy the report and, most importantly, find ways to use the ideas, concepts and recommendations detailed within. You can send your feedback to the editorial team at TM Forum via editor@tmforum.org

The big picture

The table below, compiled using news reports and publicly available data from the companies, shows the amount each hyperscale cloud provider invested in AI infrastructure in 2024 and their planned spending for 2025, along with the percentage increase. These investments are being made because the demand for AI compute, storage and data management is outstripping supply. The question telecoms operators worldwide must ask themselves now is: how can they tap into and grow their AI connectivity business?

Everything AI-based systems do, whether chatbots or autonomous agents, results in data moving from place to place, such as between an enterprise and a cloud-based AI platform or between two or more cloud-based platforms.
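To make that data movement concrete, the back-of-envelope sketch below estimates the monthly traffic a single AI workload generates between an enterprise site and a cloud AI platform. It is an illustrative model only: the function name and all traffic parameters (request rates, payload sizes) are assumptions chosen for the example, not figures from this report.

```python
# Rough estimate of monthly network traffic generated by AI inference
# requests moving between an enterprise and a cloud AI platform.
# All parameters below are illustrative assumptions, not report figures.

def monthly_traffic_gb(requests_per_day: float,
                       request_kb: float,
                       response_kb: float,
                       days: int = 30) -> float:
    """Total GB per month moved in both directions for one workload."""
    per_request_kb = request_kb + response_kb
    total_kb = requests_per_day * days * per_request_kb
    return total_kb / 1_000_000  # KB -> GB (decimal units)

# A text chatbot moves small payloads per request...
chatbot = monthly_traffic_gb(requests_per_day=50_000, request_kb=4, response_kb=8)

# ...while a video-analysis workload moves far larger ones per request.
video = monthly_traffic_gb(requests_per_day=10_000, request_kb=2_000, response_kb=10)

print(f"chatbot: {chatbot:.1f} GB/month, video analysis: {video:.1f} GB/month")
```

Even with a fifth of the request volume, the hypothetical video workload moves more than 30 times the data of the chatbot: payload size, not request count, dominates traffic, which is one reason image and video payloads are expected to drive the first wave of AI traffic growth.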
Becoming a player in this market, however, is likely to involve substantial and immediate changes to how communications service providers (CSPs) roll out networks, operations and monetization capabilities.

Not long after OpenAI, Oracle, Softbank and the UAE government's tech investment unit MGX announced Stargate, their plan to pour $500 billion into AI infrastructure in the US, global hyperscalers followed suit by announcing historic capital expenditure increases specifically for new AI technologies. Alphabet, Amazon, Meta, Microsoft and Oracle plan to increase their spending by between 20% and 100% in 2025, for a collective investment of nearly $335 billion this year.

Hyperscalers' current and planned capital investment in AI infrastructure (US$ billions)

Company      2024     2025     % Change
Alphabet     $52.5    $75      +43%
Amazon       $83      $100     +20.5%
Meta         $40      $65      +62.5%
Microsoft    $56      $80      +43%
Oracle       $6.9     $13.8    +100%
Source: TM Forum, 2025

The AI workload business case

AI platforms take data in, bounce it off their models and spit out results as projections, recommendations or automated commands: in other words, more data. These tasks are referred to collectively as "workloads", an unimaginative term borrowed from computer-science textbooks that describes a system
taking in data, processing it and producing a result. To move workloads around in the AI world, reliable connectivity that performs to specifications is required. But there is just not enough of it. "The absence of state-of-the-art technology in markets around the world presents a significant opportunity for investment and transformation," says Chris Penrose, Global Head of Business Development for Telco at Nvidia. State-of-the-art, in this case, means infrastructure that has sufficient capacity, if not dynamic capability, to support AI workloads efficiently.

As a result, connectivity is becoming a major constraint in the AI food chain. A significant share of GPU processing time is spent idle, waiting for the network, says BT Business CTO Colin Bannon. He explains that customers simply expect the network to do its job without interrupting their goals. With AI workload traffic, operators still "need to better understand jitter, latency, idiosyncratic flows," Bannon explains, lamenting the fact that "people blame the network when the app goes wrong." However, whether telecoms operators are prepared or not, "AI is escaping onto our networks," he says. But CSPs should not carry the traffic and take the blame for problems
without receiving the credit they deserve for making AI work. "Telcos need to do a better job of explaining where they sit in the value equation," argues Bannon.

Operators have an opportunity to express and communicate their value as connectivity providers in the AI world, but there are important gaps to address. "The era of edge, the ability to distribute, geolocate, chain, compose and orchestrate AI: all of this is going to happen, and that's where the telcos will have the power. Our day in the sun has not yet arrived," says Guy Lupo, TM Forum's EVP of AI & Data Innovation, who previously led network-as-a-service (NaaS) transformation at Telstra.

"Model direct-connect is about to come to life," Lupo says. This refers to growing demand among enterprises to have more robust and direct or dedicated connections between their own operations and their AI platforms, typically in the cloud. These connections transport AI workloads for both training and inference purposes. "Network operators will either be monetizing this by creating a service that gets into this direct-connect business or be left behind," says Lupo.

There could also be a "model-to-model" connectivity opportunity to transport data across AI platforms. Much of this opportunity is not currently considered addressable for telcos because the traffic is assumed to be intra-data center rather than inter-data center, where connectivity services are required. But this could change as more common AI assurance, security and sovereignty controls are required through legislation, positioning CSPs as potential vendor-agnostic providers.

Monetizing connectivity made for AI

To reap rewards in the AI connectivity market, CSPs will need to be strong on user experience and metering in ways hyperscalers' customers take for granted. Lupo argues that the big cloud providers are pushing for two things: first, to centralize all AI traffic onto their platforms; and then to pass the rising cost of electrical power on to customers. One way in which hyperscalers have attracted AI traffic to their networks is through their metering infrastructure, called out as a revenue driver in earnings announcements throughout 2024 by companies including Microsoft, Amazon and Alphabet. Enterprises use cloud platforms' in-built metering to measure their own data traffic flows on and off AI platforms.

To push back against this move toward AI centralization, Lupo argues that telecoms operators will need to provide a self-managed, direct-connect model, "but where we include the cost of bits and tokens, and bundle that into
the existing connectivity cost." CSPs will develop new plans that allow customers to balance their symmetrical network costs, or achieve certain elasticity based on token usage or consumption, he says. The move to a metered, per-token network will not require a major restructure of network services, Lupo adds. Instead, he says, operators can use an existing construct of virtual private network (VPN) or SD-WAN services to overlay AI VPN metered services to create an immediate opportunity.

Read this report to understand:
- What AI workloads are
- The business case for building networks optimized for AI workloads
- The addressable markets and growth prospects for telcos
- What special operations capabilities are needed to run networks for AI
- How monetization and customer experience must change to meet customers' expectations in the AI connectivity market
- Why autonomous networks are key to realizing AI connectivity revenue
- Why business support systems (BSS) need to change to monetize AI connectivity

Section 1: Making the business case for AI workload connectivity
The AI boom is in its infancy, and many early clues suggest that an explosion in workload volumes will take place. This is helping companies like Nvidia pitch the idea of networks optimized for AI workloads. Nvidia, which makes graphics processing units (GPUs) and chip systems, and its partners, including T-Mobile US and Softbank, have been selling the idea of "AI-ready" networks to operators in the form of AI radio access networks (or AI-RAN, as it's dubbed by the AI-RAN Alliance). This was a hot topic at TM Forum's Innovate Americas event held in September 2024.

Nvidia's business case for refitting 5G RANs into networks that self-optimize and transport AI workloads on demand is built on several core assumptions that apply to any AI-ready network, whether RAN, multi-access edge computing (MEC) or otherwise:

- Network over-provisioning is a thing of the past. Networks must be self-optimizing and able to call for or accept workload traffic whenever resources are available while maximizing utilization of network resources.
- Connectivity services must be accessible on demand. This necessarily means via API and from the pre-existing network platform.
- The network is a platform. It needs to provide service assurance and metering on a per-service basis, as hyperscale cloud services do.
- The user experience is a cloud portal, and maybe not yours. Users expect self-serve, on-demand access, potentially via portals they already prefer and use every day, including independent software vendors' and hyperscale cloud vendors' portals.
- The whole setup is developer friendly. The most sophisticated and valuable consumer is the developer, and they should be catered to with APIs, tools and experiences suited to the work.
- Workloads are ever abundant. The entire business case assumes that demand for capacity to transport workloads will continue to outpace supply for quite some time.

This set of assumptions might also be interpreted as requirements for market entry, but it creates conflicts for most operators. Few if any telecoms
networks can do any of these things at scale, much less all of them. And AI can't flourish unless someone provides this type of intelligent and dynamic connectivity.

Positioning the network proposition

Nvidia's Penrose believes that "telecoms operators are uniquely positioned because they have the connectivity piece that no one else has". But they need to provide networks that can self-optimize collective resources based on demand and fulfill more than one purpose, which is not how networks are currently designed. Today, most are purpose-built, delivering pre-reserved connections in a model that resembles nailed-up circuits. This requires networks to be overprovisioned and underutilized to accommodate peak loads. "It's like flying a plane with first class sold but the rest of the plane available," says Penrose. He adds that he's seeing telecoms providers' average utilization in the 30% to 40% range and questions whether it makes sense to build networks "that can only do one thing" as opposed to establishing an application-ready infrastructure by building networks capable of doing multiple things. Nvidia urges telcos to look at converting RANs from systems designed only for RAN use
37、 40%range and questions whether it makes sense to build networks“that can only do one thing”as opposed to establishing an application-ready infrastructure through building networks capable of doing multiple things.Nvidia urges telcos to look at converting RANs from systems designed only for RAN use
38、to AI RANs that use“a common GPU-based infrastructure that can run both wireless and AI workloads concurrently”.In plain terms,the business case is built on the assumption that AI workloads are,and will continue to be,abundant.A recent Nvidia survey found that 84%of telco respondents believe AI is a
39、lready contributing to increases in their annual revenues.A full 97%said their companies are actively assessing and deploying AI in productivity,customer experience and network operations roles.Research firm Omdia tracks the AI connectivity market and projects a cautiously optimistic outlook for CSP
40、s.Omdias inaugural AI Network Traffic Forecast:2022-30 found that while“AI-enriched interactions”churned out 63 exabytes of network traffic per month globally in 2023,this value will grow nearly twenty-fold to 1,226 exabytes per month by 2030.This will represent 64%of all global traffic,versus rough
41、ly one-third today.The payloads that will drive much of this volume initially will be images and video because“image analysis(and response)are key AI activities”,Omdia contends.In subsequent phases,however,more sophisticated consumer and B2B applications will enter the market to further increase the
pace of growth in AI traffic volume, including applications where machines analyze and create digital video.

Brian Washburn, Research Director, Service Provider Enterprise & Wholesale at Omdia, expects that in this initial wave of AI rollouts enterprises will deploy many simple applications that utilize video, image and audio capture to replace manual tasks. "It's kind of dumb AI, like watch the assembly line and make sure nothing drops off," he explains. In the abundance of practical use cases, the network traffic will come mostly from transporting video and images to different locations for analysis, he says.

For CSPs, "AI is a force that can turn a declining B2B revenue market back to a modest increase or state of growth," says Washburn. However, they might expect less inter-domain traffic over the long run, he adds, because it "will get throttled, even just by cost" as users determine how much "you really need inferencing and ongoing learning for things that do basic autonomics". But even if users do throttle traffic to control costs, an overall rise in demand for AI-based solutions could provide revenue growth for telcos, along with more specialized requirements that can be monetized, such as low latency,
quality of service, redundancy and security.

Softbank, Red Hat and Fujitsu are now piloting AI RAN using Nvidia's GPU technology, though full-scale rollout is not planned until 2026. "Dynamic AI workload management" is a key capability Nvidia calls out within this pilot to allocate and utilize network resources efficiently "based on real-time demands". Nvidia claims that local wireless infrastructure can provide "an ideal place to process AI inferencing".

Sreedhar Rao, Telecom CTO at Snowflake, an AI data cloud company, says part of the reason Nvidia has chosen to focus its message around 5G RAN is because "it's talking about autonomous capabilities with closed-loop automation, which could already be done". He notes that 5G is more friendly to autonomous networking (AN) than other mobile architectures because it uses a componentized network model (see section 3).

Rao adds that Verizon has demonstrated a similar advantage in retooling its multi-access edge computing (MEC) networks for AI workloads. Srini Kalapala, Verizon's Senior Vice President of Technology and Product Development, explained at the time of the company's announcement how Verizon is combining Nvidia's AI compute platform with its private networks to enable "real-time AI applications that require security, ultra-low latency, and high bandwidth". In the next section we look at AI workloads and what each demands from telcos' connectivity networks.

Section 2: How do AI workloads differ and what do they need from telco networks?

To understand the business case for retooling networks or building new ones to support AI workloads, it's useful to understand what AI workloads are, how they vary and how they're changing. It's also important to understand what the introduction of different types of AI will do to the volume and pace at which new workloads are created, and what kinds of demands they can put on networks. Technically, a workload is how
much processing is needed for a computer system to perform a given task like running an application, facilitating a transaction or managing data storage. Some types of general computing workloads include:

- Batch workloads, where tasks are processed automatically in batches
- Transactional workloads, like the many database transactions that happen in business operations
- Interactive workloads, which require live user interactions such as through web and mobile apps
- Analytical workloads, which can include data analysis, mining and business intelligence operations.

But AI introduces many new types of workloads, with more varieties being created all the time. For example, training and using AI models generates workloads for data processing, machine learning and deep learning, among others (see graphic below).

Types of AI workloads (TM Forum, 2025)
- Training: AI models are trained to identify patterns and make decisions and predictions
- Deep learning: used when training and deploying neural networks to recognize images, speech and natural language
- Inference: catch-all term for using a trained AI model to analyze data and make decisions and predictions
- Natural language processing: processes that make chatbots, virtual assistants and translators easy for humans to interact with
- Data processing: handling, cleansing and prepping data prior to running analyses or training AI models
- Generative AI: used to create content like articles, art and synthetic data autonomously
- Machine learning: use of algorithms that can learn from data to make and improve predictions
- Vision capabilities: interpreting visual data like images and video for face recognition and object detection

Two of the most discussed categories
are training and inference workloads. These terms are often used generically to cover many different types of AI workloads, but in reality the workloads in these categories can vary significantly in terms of complexity and size. For example, an AI that provides text-based language translation of email uses a simpler process on a much smaller payload than, say, an application that monitors live video to identify public safety threats.

Training and inference workloads do the bulk of the actual computing work behind AI-powered applications. Network and compute capacity are both needed to transport and process the data in these workloads, hence the assumed demand for networks that support AI architectures' specific needs with connectivity and GPUs. Nvidia's business case argues that inference workloads represent the greater market opportunity for telecoms providers. But many types of both inference and
training workloads can present opportunities for CSPs. In the rest of this section, we look more closely at some of them.

Training workloads

AI training is the process of teaching an AI system to perceive, interpret and learn from data. In the first phase of a training process the model is fed massive amounts of data and is then asked to make decisions based on it. The training part involves adjusting the model until it produces satisfactory results. Training workloads can vary significantly based on what an AI system is being trained to do (see graphic below).

Examples of training workloads (TM Forum, 2025)
- Natural language processing: used in chatbots, search and any other application relying on an AI-based natural language interface
- Time series analysis: used for financial reports and other projections, often for decision-making
- Video recognition: used in safety monitoring, where AIs spot threats by identifying specific objects in live video
- Anomaly detection: used in cybersecurity, fraud detection and anti-money laundering to spot and stop theft and hacks
- Speech recognition: voice interfaces have been popularized for hands-free work and are evolving with AI-based automation
- Robotic process automation (RPA): automates increasingly complex and asynchronous tasks and processes across enterprises

Complex AI training can also involve multi-modal learning, where AIs can take inputs from a variety of data such as audio, video, images and text to inform automated decisioning. Training may also be conducted by reinforcement,
where an AI is put through a series of tests to learn by trial and error. It's important to note that training is continuous. Though initial training may represent a spike in data being loaded into an AI model, subsequent training and re-training for continuous improvement is a part of any AI's lifecycle.

Inference workloads

Inference is a generic term used to describe AI workloads generated by the day-to-day business of a production AI system. Every time the AI ingests and analyzes new data that is not training data, those are inference workloads. Such workloads take many forms to accomplish different tasks.

Examples of inference workloads (TM Forum, 2025)
- Language translation: automatic and real-time translation of language, such as in emails
- Fraud protection: real-time anomaly detection and communication, such as a bank warning you that a solicitation for a real-time payment looks like a scam
- Recommendations: providing suggestions based on past interactions, such as recommending a new movie based on those you've watched previously
- Object detection: an AI-enabled video system detects a problem with an object, such as a loose part that might put a manufacturing line at risk

GenAI workloads

Generative AI (GenAI) models generate different types of workloads depending on which tasks they are being applied to. For example, an application such as a GenAI chatbot used to support customers will create workloads to automate and complete its customer-facing tasks.

Examples of GenAI workloads (TM Forum, 2025)
- Query resolution: answering FAQs about products, services, processes and policies
- Sentiment analysis: detecting customer sentiment to determine incentives or call routing
- Personalized recommendations: providing a customer with data-driven product and service recommendations
- Order management: managing interactions relating to order status, cancellations, additions and changes
- Data collection and reporting: collecting and analyzing customer interaction data and distributing the results

Agentic AI drives more workloads

Agentic AI is garnering attention for its promise to enable more complex AI-based automation. Such systems are designed to autonomously perform tasks, make decisions and solve complex problems with minimal human intervention, essentially behaving like independent agents that can take action based on their own understanding of a situation rather than relying on instructions from humans (see graphic below).

Soon, the market that comprises largely GenAI applications like chatbots will likely grow to include more complex uses for agentic AI, such as dynamically allocating and balancing network resources or simplifying, accelerating and automating systems integration. Indeed, if anything can spur AI workload volumes beyond analysts' cautious projections, it could be agentic AI. Anecdotal estimates suggest it could drive the demand for network connectivity to rise even faster than GenAI, because it may generate three to five times as many workloads, which would translate to an equal increase in network traffic.

For example, let's say a static GenAI solution that analyze
s customer and product data to make smart recommendations generates 100 workloads to achieve its tasks. Agentic AI could generate 300 to 500 workloads when performing the same role, because it would also monitor customer behavior, use more data sources to build and maintain customer profiles, adjust recommendations continuously, trigger automated processes like issuing rewards for loyalty milestones, and perform continuous-improvement processes.

Steve Ruben, an AI developer and blockchain patent holder, believes a driver of the increase in workloads is that AI agents "will always be working in the background". Their tasks will go well beyond commands like "Write me a fancy paragraph", he says, adding that "multi-agent collaboration will only drive demand exponentially as agents begin to orchestrate much larger tasks". These can range from simple automation like calendar and travel management to autonomously managing and optimizing complex 24x7 production operations. But it's important to note that it's early days for agentic AI. Gartner predicts it will only reach the top of its Hype Cycle (the five phases the analyst company says a new technology typically goes through as it matures and is adopted) this year.

Agentic AI will produce more sophisticated workloads (TM Forum, 2025)
- Logical reasoning: decomposes complex problems and executes multiple tasks in a logged process
- Ethical reasoning: evaluates actions and decisions against moral and ethical rules before acting
- Dynamic decisioning: assesses and adapts to new information, environments or goals
- Continuous improvement: seeks out new data and methods to improve its own performance

Dynamic provisioning and load balancing

Some real-world applications for agentic AI are already attracting interest from CSPs, including the same type of dynamic provisioning and load balancing that's needed for AI workloads. Networks that run AI workloads increasingly are expected to be able to provision new connections dynamically while simultaneously load balancing network traffic and optimizing resource utilization. Accomplishing this requires a variety of workloads that vary in scope and complexity to be processed (see graphic below).

Examples of dynamic network provisioning and load balancing workloads (TM Forum, 2025)
- Real-time monitoring: analyzes network traffic, performance and usage by collecting and analyzing data from network nodes
- Dynamic resource allocation: manages how network resources are utilized in real time and responds to predicted and measured changes in demand
- Predictive analysis: machine learning models forecast traffic changes and recommend resource optimizations
- Automated response: autonomously executes traffic re-routing, instantiates virtual network functions and reallocates other network resources

Systems integration

Telecoms providers have long faced challenges in integrating multiple IT and network systems from different vendors, and they believe agentic AI may have a role to play in simplifying many difficult tasks. This, too, results in a variet
y of dynamic workload types (see graphic below).

Examples of systems integration workloads (TM Forum, 2025)
- System analysis and mapping: analyzes system structure and interfaces like APIs, as well as data formats and communications protocols
- Data transformation: parses, reformats and converts data to facilitate integration while ensuring data integrity
- Autonomous development: creates and deploys integration logic to connect systems, such as designing data flows and configuring middleware
- Testing and validation: automatically tests integrations for functionality, performance and security while finding and fixing issues
- Continuous improvement: analyzes logs and other aspects of integration performance to drive code updates and other improvements

For all the examples in this section, the more sophisticated and large-scale the AI solution, the more workloads will increase. CSPs need to figure out how to deliver and monetize such connectivity. The next section looks at some of the network and operational changes needed to provide connectivity for AI workloads.

Section 3: How to provide connectivity for AI workloads

CSPs are already running some AI workloads on their networks, but doing the kind of dynamic, automated optimization that Nvidia's "AI-ready network" envisions requires them to implement autonomous networks (ANs). The changes won't stop there th
ough. Indeed, the biggest challenge may lie in the fact that many of the new requirements for delivering AI connectivity at scale are still unknown.

The AI-ready network concept requires that CSPs reach at least Level 4 AN, based on the taxonomy developed by TM Forum's Autonomous Networks Project (see graphic below). A key Level 4 capability is the ability for the network to make predictive and independent decisions relating to network health and resource utilization. At Level 4, networks can respond to real-time needs, like a spike in demand for AI workload processing, and optimize themselves across network and vendor domains using closed-loop management.

The goal of an autonomous network is to operate independently and reduce (or eliminate) the need for human involvement. Where traditional automation uses static, pre-defined rules to drive repetitive tasks, autonomy involves intelligent systems making independent decisions dynamically. Highly autonomous networks (Levels 4 and 5) require the use of AI and machine learning, and they should be able to learn and improve their operations over time.

Autonomous network levels (TM Forum, 2024)
- Level 5, fully autonomous network: the system has closed-loop automation capabilities across multiple services, multiple domains (including partners' domains) and the entire lifecycle via cognitive self-adaptation.
- Level 4, highly autonomous network: in a more complicated cross-domain environment, the system enables decision-making based on predictive analysis or active closed-loop management of service-driven and customer experience-driven networks via AI modeling and continuous learning.
- Level 3, conditional autonomous network: the system senses real-time environmental changes and in certain network domains will optimize and adjust itself to the external environment to enable closed-loop management via dynamically programmable policies.
- Level 2, partial autonomous network: the system enables closed-loop operations and maintenance for specific units under certain external environments via statically configured rules.
- Level 1, assisted operations and maintenance: the system executes a specific, repetitive subtask based on pre-configuration, which can be recorded online and traced, in order to increase execution efficiency.
- Level 0, manual operations and maintenance: the system delivers assisted monitoring capabilities, but all dynamic tasks must be executed manually.

TM Forum aims to help CSPs build networks that se
lf-manage, adapt to customers' business goals, or "intents", and optimize operations in real time. The AN Project is addressing this set of challenges by developing an AN target architecture, which includes guiding architectural principles, a business architecture, and technical and formal reference architectures.

AI network requirements

But for networks to be suitable for AI workloads they will require other characteristics and capabilities that are only beginning to emerge. BT's Bannon says even after looking closely at primary traffic sources, like the hyperscalers' data centers and the traffic they are driving, "a lot of it is still extrapolation". He adds: "Until the models stabilize a bit, no one will have a very good view of this." In other words, providing connectivity for AI workloads is very new territory. Below we highlight projects, carried out by TM Forum members, that demonstrate AN capabilities needed for future AI connectivity. And in the rest of this section we outline some of the key requirements and challenges that CSPs expect in relation to connectivity for AI workloads.

Many TM Forum Catalyst proofs of concept have demonstrated AN capabilities. For example:
The AI-Driven Autonomous Network (AIDEN) Catalyst, now in its second phase, exhibits Level 4 AN capability enabled by autonomous agents. These specialized agents co-operate with each other using GenAI to diagnose, heal and optimize the network while learning from incidents, or faults. As a result, this approach automates the incident-resolution process to improve mean time to repair (MTTR) and customer satisfaction (CSAT) while reducing operational costs.

The Autonomous networks hyperloops Catalyst, which evolved through five phases, looked at how to enable smart vertical use cases such as distance learning, farming, stadiums and manufacturing. In later phases the team showed how to use intent, closed loops, 5G network slicing, AI and digital twins to deliver pop-up mobile networks in the event of a natural disaster. Most recently it demonstrated how telcos can provide a virtual command center (VCC) as a service to centrally monitor and manage critical infrastructure and resources such as power grids, transportation, networks and healthcare facilities.

The Achieving autonomous networks: Evolution towards full autonomous networks Catalyst, a new project which will be demonstrated at DTW Ignite 25 in June, is aiming to push the boundaries of digital operations by demonstrating a fully autonomous network at Level 5. By combining AI, AN and digital twins in a single interface, this project aims to provide the zero-touch customer and user experience models that are expected in the AI world. Enhanced observability, knowledge graphs and AIOps-driven closed loops will enable the network to detect, diagnose and resolve issues autonomously.

Learn more about autonomous networks:
- AN journey guide: Level 4 industry blueprint, high-value scenarios (November 2024), which highlights a blueprint for progression toward Level 4 autonomous networks
- Autonomous networks: in search of best practice (Benchmark report, December 2024)
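The detect-diagnose-heal-verify loop these Catalyst projects demonstrate can be sketched in miniature. The alarm types, root causes and playbooks below are invented for illustration; they are not taken from any Catalyst code.

```python
# Minimal closed-loop remediation sketch: detect -> diagnose -> heal.
# All names (Alarm, PLAYBOOKS, the symptom-to-cause map) are illustrative.
from dataclasses import dataclass

@dataclass
class Alarm:
    device: str
    symptom: str   # e.g. "packet_loss", "link_down"

# Illustrative remediation playbooks keyed by diagnosed root cause
PLAYBOOKS = {
    "congested_port": ["reroute_traffic", "raise_qos_priority"],
    "failed_linecard": ["failover_to_standby", "open_ticket"],
}

def diagnose(alarm: Alarm) -> str:
    # A real agent would correlate telemetry and logs (possibly with GenAI);
    # here a static rule maps the symptom to a root cause.
    return {"packet_loss": "congested_port", "link_down": "failed_linecard"}[alarm.symptom]

def heal(alarm: Alarm) -> list[str]:
    # Look up and "execute" the playbook; a verify step would then
    # re-check telemetry before closing the incident.
    return PLAYBOOKS[diagnose(alarm)]

print(heal(Alarm("edge-router-7", "packet_loss")))
```

In a multi-agent version of this loop, separate agents own the diagnose and heal steps and negotiate over shared incident state, which is what allows MTTR to fall without human intervention.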
Networks for data in motion

Bannon admits that whether telecoms operators want to provide connectivity for AI or not, they will be forced to. AI has not been launched onto the network; it has escaped onto it. Indeed, AI workloads are already running on telco networks without having been launched as a specific service or capability. As a result, Bannon foresees that telcos "will need new capabilities and network to deal with all the data movement: data at rest, but now especially for data in motion". "BT abstracted every port and protocol so we can software-define them and have software-defined control on every hop in the route, with intent-based central path computation," he says. This type of fine control over where data is routed and how it is handled will likely prove critical, especially for applications which deal with data requiring certain levels of security and privacy, or which, because of regulatory or legal requirements, cannot leave certain geographic jurisdictions in the course of being analyzed by AI platforms.

Snowflake's Rao adds that demand for data within large organizations continues to grow as users realize they can request and potentially receive previously inaccessible data from other organizations if they know what to request. In many cases "data is only known to the creators of that data", he says, but AI is accelerating demand for data sharing "in something like an enterprise marketplace" which can "automate the ability to see and use that data".

Determinism is needed

Getting to this level of specificity highlights the need for deterministic networks, which Bannon says are "a prerequisite for AI to work". "Slow is the new down," he adds, meaning that slow connectivity is as bad as none in a low-latency scenario. A deterministic network, as defined by the IETF, is one that can support real-time, low-latency and low-loss data flows for applications like streaming video, factory and farm automation, and autonomous vehicle operations, any of which is likely to integrate AI components. Determinism is not how most telecoms networks work today, however. "Operators' current infrastructure is not provisioned in a way that you can use spare capacity," explains Rao, "so there needs to be a big change in how they deploy infrastructure in the first place" to arrive at a deterministic network that can also self-optimize based on AI and other traffic.

Separate AI network domains per application

Bannon adds that operators may not be able to expect to operate a single network fabric to support AI traffic. Rather, they may require "separate platforms or very distinct resource pools across markets, because you aren't going to have your national critical infrastructure running from the same pool" as consumer applications, for instance.
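A toy admission check can illustrate both ideas at once: deterministic connectivity means a flow is only admitted when its latency bound can actually be met, and distinct resource pools keep critical-infrastructure traffic apart from consumer traffic. The pool names and numbers below are invented for illustration.

```python
# Toy admission control for deterministic connectivity: a flow is admitted only
# if its pool has bandwidth left AND the pool's path meets the flow's latency
# bound. Pool names and figures are illustrative, not operator data.
POOLS = {
    "critical-infra": {"capacity_gbps": 100, "used_gbps": 60, "path_latency_ms": 2},
    "consumer":       {"capacity_gbps": 400, "used_gbps": 390, "path_latency_ms": 12},
}

def admit(pool_name: str, demand_gbps: float, max_latency_ms: float) -> bool:
    pool = POOLS[pool_name]
    has_capacity = pool["used_gbps"] + demand_gbps <= pool["capacity_gbps"]
    meets_latency = pool["path_latency_ms"] <= max_latency_ms
    if has_capacity and meets_latency:
        pool["used_gbps"] += demand_gbps   # reserve the capacity deterministically
        return True
    return False

print(admit("critical-infra", 10, 5))   # admitted: capacity and latency bound both hold
print(admit("consumer", 20, 5))         # rejected: pool nearly full and path too slow
```

A best-effort network would simply forward the second flow and let it suffer; the deterministic version refuses it up front, which is exactly why "slow is the new down" becomes an enforceable contract rather than a slogan.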
Watch out for automation silos

"Automation tends to happen within vendor silos," says Rao. In the rush to roll out AI pilots, many operators created vendor silos for autonomous and intent-based network capabilities, but the "intents don't work together", he explains. As a result, some operators achieved pieces of intent and autonomous capabilities, "but it's a silo again". Creating another set of disconnected silos to run AI networks would be counter to delivering a self-optimizing network fabric for AI workloads.

Support for local models

AI users today are most familiar with the large, cloud-based models such as OpenAI's ChatGPT and Google's Gemini. But developers working with these
models and others have identified a real need for hybrid architectures that combine cloud-based AI models with models operating locally, because of the cost. "If I'm a business of any size with a decent number of people and I want to augment them with AI from the cloud, my cost per token is not cheap," says AI developer Steve Ruben. "I firmly believe the solution to what we are butting up against is in the local model." (We discuss the cost of AI connectivity more in the next section.) Bannon believes that local models will need to come into play for other reasons as well: "Because of data sovereignty, or because companies will want to hold onto some inference in a local model, because people don't want to give up their carefully curated data set" to a third party, he says.

Hybrid local-cloud AI models for developers

Ruben, who helped implement a large-scale, open-source production API for a major US mobile operator to provide live decisioning and personalized offers, agrees with Bannon and adds that his own desire to utilize a local model for developing and operating AI applications came down to having basic, model-derived functionality available even if connectivity was down. "We went from 1 million to 3 million lines of code in a few months because the AIs learn how we code and anticipate and augment our work with strong code," Ruben explains. One day, however, "we had ChatGPT go dead, and I realized I only needed a small model running on my PC", he adds, explaining that inferencing could be done on the local model with only periodic updates made against the master, cloud-based model to correct for drift, which is the gradual decline in the performance of an AI model over time because of changes to data.

Ruben adds that better economics offered by new platforms like DeepSeek, which shocked the AI market with its latest release in January, will make "the ability to distill a local model that meets your specific needs very realistic". The ideal setup would be a small, "multi-agent" model that can refer to a larger model in the cloud, Ruben says. But "the larger model I would pay for would be just to close gaps in the local models", he adds, referring to drift. This would help control how much AI traffic leaves the local network environment and incurs cost by calling out to a cloud-based service for model updates or moving data there for inferencing.
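Ruben's setup can be sketched as a client that serves inference locally, escalates to the metered cloud model only when the local model is unsure, and syncs periodically against the master model to correct drift. The model functions, confidence threshold and sync interval below are invented stand-ins, not a real API.

```python
# Sketch of a hybrid local/cloud inference client: answer locally when
# confident, call the metered cloud model otherwise, and periodically pull
# a drift-correcting update. All names and thresholds are illustrative.
SYNC_INTERVAL_S = 24 * 3600     # pull an update against the master once a day
CONFIDENCE_FLOOR = 0.7

def local_model(prompt: str):   # stand-in: returns (answer, confidence)
    return f"local:{prompt}", 0.9

def cloud_model(prompt: str) -> str:   # stand-in for a paid, per-token API
    return f"cloud:{prompt}"

class HybridClient:
    def __init__(self) -> None:
        self.last_sync = float("-inf")

    def infer(self, prompt: str) -> str:
        answer, confidence = local_model(prompt)
        if confidence >= CONFIDENCE_FLOOR:
            return answer              # no tokens leave the local network
        return cloud_model(prompt)     # metered call: traffic plus cost

    def maybe_sync(self, now_s: float) -> bool:
        # Periodic update against the master model corrects for drift.
        if now_s - self.last_sync >= SYNC_INTERVAL_S:
            self.last_sync = now_s
            return True
        return False

client = HybridClient()
print(client.infer("classify this ticket"))   # served locally
```

The design choice is the one Ruben describes: the only traffic that leaves the local environment, and therefore the only traffic that incurs cost, is the occasional low-confidence escalation and the daily sync.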
Next, we look more closely at monetization of AI connectivity and the implications for metering, billing and customer experience.

section 4: how to monetize AI connectivity

On top of rethinking how their networks operate, CSPs will need to change their business support systems (BSS) and their approaches to collecting and metering usage data in order to monetize AI workload connectivity. At the same time, they must change the way customers perceive the value of telecoms services, which could be the toughest job of all.

"Do not forget the network in the stack of resources and infrastructure needed to support this AI boom," insists BT's Bannon. "It's not surprising that the AI people, app people and cloud people don't talk about network. It's not part of their revenue model. There's nothing in it for them, and they don't want to talk about the complexity."

Telcos need to learn from their past mistakes. "When we look at MPLS and Ethernet classes of service, telcos never effectively monetized that," says Omdia's Washburn. A better approach, he argues, might be to "do things on a business outcome basis", where large customers state goals and desired outcomes and the operator determines how best to meet them. Then, a bill should be provided for the total value of the solution, rather than just price-per-connection. There's a parallel argument that says if something is exposed via API it can, and maybe should, have a price tag associated with it. More specifically, connectivity characteristics like low latency, redundancy, uptime and quality of service add value to the end-to-end equation of whatever is being achieved when using cloud-based AI platforms. Some of these capabilities are being addressed through efforts like the GSMA's Open Gateway initiative, where TM Forum is working with GSMA and the CAMARA project to open telecoms networks to developers. But most telcos do not currently have the ability to meter, rate and provide real-time usage and cost data on a per-service basis, not even in 5G networks.

"The missing piece is on the billing side," says Snowflake's Rao. Indeed, several of the experts interviewed for this report agreed that for AI connectivity services, as well as other B2B connectivity offerings, cloud-style metering, monetization and associated customer experience is a baseline expectation. That means delivering self-service portals that provide tools and interfaces for metering all usage data and providing it in real time to users. Users in turn need to be able to drill to any level of the data stream to monitor usage, performance and cost.

"Enterprises are demanding this now," says Rao. "Operators can't be a black box on billing." But he acknowledges that monetizing connectivity for AI in the way that cloud hyperscalers monetize compute and storage services is "a heavy lift for telco operators" and will "require massive realignment on how they collect, meter and surface that data, and show price and performance metrics to end users".

BT's granular approach

"We are going to be on the fractional, marginal charging side," says BT's Bannon, meaning capabilities that allow usage to be metered down to the most granular levels. He adds that getting the "qualitative bits right", or translating the quantities measured into the correct business metrics, is how CSPs would implement charging for outcomes and experiences. Bannon compares the idea of providing and charging for additional capabilities with AI connectivity to peering. For example, where enterprise customers want better classes of traffic handling for their latency- or security-sensitive AI workloads, peering customers "want to be on-net with few hops" and are willing to pay more for that value. The same would apply in wholesale and partner services where a "microservices-based charging and colocation charging model" would be used. That would make it clear that "if you want to improve that service and partner around it, there's a cost to that", Bannon says. So, the value added to the solution is reflected in the total cost and distributed across the user base through metering usage of the added components.
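The fractional, component-level charging Bannon describes can be sketched as a rate card applied to metered usage counters, producing an itemized view the customer can drill into. The component names and rates below are invented for illustration.

```python
# Sketch of fractional, component-level charging: meter each value-added
# capability separately and rate it, so the bill reflects the total
# solution rather than one price-per-connection. Rates are illustrative.
RATE_CARD = {                      # price per metered unit, in US$
    "premium_transit_gb": 0.02,    # latency-optimized traffic class, per GB
    "on_net_peering_hours": 0.50,  # few-hop on-net peering, per hour
    "colocation_rack_days": 12.0,  # partner colocation, per rack-day
}

def rate_usage(usage: dict[str, float]) -> dict[str, float]:
    """Turn raw usage counters into itemized, drillable charges."""
    return {k: round(v * RATE_CARD[k], 2) for k, v in usage.items()}

usage = {"premium_transit_gb": 1500, "on_net_peering_hours": 720}
charges = rate_usage(usage)
print(charges)                          # per-component charges
print(round(sum(charges.values()), 2))  # total value of the solution
```

The point of keeping each component separately metered is exactly the transparency Rao calls for: the operator stops being "a black box on billing" because every line item maps to a measured quantity.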
Along with this approach, Bannon suggests that CSPs should communicate to the market that AI platforms and other key business platforms from major software vendors "work better on our network", especially AI.

Factoring in energy costs

A major question around AI is how to power it, which is followed just as quickly by: "Who will pay for the power?" TM Forum's Lupo believes the major AI platform providers plan to pass power costs on to customers and that telcos will need to follow suit. Microsoft CEO Satya Nadella addressed this issue during his keynote address in London at Microsoft's AI Tour in September 2024, advocating for metering and billing on a "tokens per watt per dollar" basis, particularly as AI models increase in scale and complexity. Tokens, which are chunks of text that might be as short as a character or as long as a word, are the baseline metered currency of the AI world. AI platform providers may account differently for what makes up a token, and they may charge different rates, as shown in the graphic on the next page. Typically, the cost is calculated per million tokens. Figuring out how much electric power costs will add to the cost per million tokens is tricky, but we describe one approach in the box on page 27.

Learn more about network charging and network APIs:
- How NaaS is driving the evolution of telco operations (January 2025)
- Meeting expanded expectations for network charging (October 2024)

Based on Nadella's statements, Lupo believes the hyperscale community is "prepping you to pay for power".
The risk for CSPs is that they will transmit increasing upstream data into AI platforms like Microsoft's, which in turn will "roll the power down to our customers, who will be upset and ask for the network to be cheaper", says Lupo. He adds that in such a scenario, telcos could be "pushed out of the picture" as hyperscalers eat into what enterprises spend with CSPs for connectivity and related services. "What are you going to do about it?" asks Lupo. He says the answer can reside in the fact that the "AI network is the business of metering and assurance" and, just as AI providers anticipate needing to bill for power, so will telecoms. "The way we will have to bill for an AI-ready network is bits per token per wattage per dollar," Lupo says. This approach does not equate all tokens or workloads. Instead, it looks at the actual bits being transported and their associated power consumption costs to arrive at a billable metric that works with the token-based approach hyperscaler AI platforms use. By accounting for the fluctuating cost of power and metering the bits associated with tokens or workloads, telcos would focus monetization on how and to what degree their resources are being consumed, while also protecting themselves against the underlying cost of power to run AI networks and data centers.

Comparison of per-token costs across AI platforms (price in US$ per 1M tokens; source: artificialanalysis.ai; TM Forum, 2025):
- o1: 26.3
- Claude 3.5 Sonnet (Oct '24): 6
- GPT-4o (Nov '24): 4.4
- Mistral Large 2 (Nov '24): 3
- o1-mini: 1.9
- o3-mini: 1.9
- Claude 3.5 Haiku: 1.6
- Nova Pro: 1.4
- DeepSeek R1: 1
- Llama 3.3 70B: 0.6
- GPT-4o mini: 0.3
- Gemini 2.0 Flash: 0.2

Calculating the approximate energy cost of an AI workload requires knowing the cost of energy per token, given that AI services are billed on a per-token, or per million
tokens, basis. According to Microsoft Copilot, the average number of tokens used per query currently is roughly 180: 30 for the query and 150, on average, for the response. Although AI platform providers define tokens and their pricing differently, one token is generally measured as 4 or 5 text characters. The commonly accepted amount of energy consumed per response token is 3 to 4 joules. A typical combined query and response uses about 5 joules per token, which is equal to a small fraction of a kilowatt-hour: 0.00000139 kWh. The average price per kWh for the US industrial sector at the time of writing is 8.01 cents, or $0.0801, according to the U.S. Energy Information Administration. Residential and commercial sectors, meanwhile, reflect roughly 100% and 50% higher rates respectively than those charged to industrial-sector customers. We are assuming an AI data center would be classified as industrial, especially given its power requirements, and therefore uses the lower rate. Using a higher rate would result in an energy cost per million tokens greater by the same ratios. At the industrial rate, current as of the end of December 2024, the energy cost per token is therefore $0.000000111, or $0.11 per million tokens. For every hundred million tokens consumed (some users report consumption of anywhere from 10 million to 700 million tokens per month) adding power increases the cost by $11. The final section offers recommendations for CSPs considering AI network connectivity opportunities.
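The box's arithmetic can be reproduced in a few lines. The 5 joules per token and the $0.0801 per kWh industrial rate come from the text above; treat the result as an approximation, since both figures move over time.

```python
# Reproduces the box's arithmetic: energy cost per million tokens at the
# US industrial electricity rate, assuming roughly 5 joules per token.
JOULES_PER_TOKEN = 5            # approximate figure used in the text
JOULES_PER_KWH = 3.6e6          # 1 kWh = 3.6 million joules
USD_PER_KWH = 0.0801            # US industrial rate, Dec 2024 (EIA)

kwh_per_token = JOULES_PER_TOKEN / JOULES_PER_KWH      # ~0.00000139 kWh
usd_per_million_tokens = kwh_per_token * USD_PER_KWH * 1e6

print(round(usd_per_million_tokens, 2))        # ~0.11 dollars per 1M tokens
print(round(usd_per_million_tokens * 100))     # ~11 dollars per 100M tokens
```

Swapping in the residential or commercial rate (roughly 2x and 1.5x the industrial rate) scales the result by the same ratios, as the box notes.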
section 5: make it happen – strategies for monetizing AI connectivity

CSPs have a unique opportunity to play a significant role in the market for AI connectivity. Their infrastructure (data centers, fiber networks, MEC capabilities and AI RANs) can support the demanding requirements of AI applications, including their need for low-latency and high-throughput connectivity. But telcos will need to adopt autonomous networks and metered charging and billing to play in the market. Following are some steps to take.

Communicate the value of the network

To monetize AI connectivity CSPs must do a better job of communicating the real value of the network to customers that are increasingly reliant on AI to run their businesses. "There is, for some reason, a higher value ascribed to compute, memory and disk storage than network," says Bannon, who insists this must change. "We need to do a better job of communicating how valuable connectivity is and always has been to producing business results with AI and any other technology," he says.

Leave legacy behind

"There's always a huge temptation for a new CEO not to want to spend money on what they have, but rather on new revenue streams," says BT's Bannon. "Telcos got into trouble this way with too many layers of abstraction and never spending to move past old tech debt." In the AI market, this must change. AI connectivity requires cloud-style metering, monetization and associated customer experience, plus Level 4 AN to self-optimize resources based on demand. "BT has made the decision to rip the plaster off and get rid of all the old that was holding us back, and reinvent what connectivity and network-as-a-service looks like," says Bannon.

The TM Forum Autonomous Networks Project is helping network operators make AN capabilities like intent-based networking possible in heterogenous networks. Vendors are now implementing intent-based capabilities and interfaces in their products, but often the applications do not interoperate out of the box with those from other vendors. The Open Digital Architecture (ODA) Production Team is exploring how software agents and agentic AI can address this integration gap by autonomously managing the necessary, inter-domain intent execution and negotiation performance between network layers and across vendor and technology domains.
Make AI's cost and value visible

AI use is highly dynamic as users send queries and workloads to multiple platforms, and the cost of running these workloads is important to any organization that uses AI. "When you run a workload you need visibility to what the different models contribute to the task," says Snowflake's Rao, so that you can understand the cost/benefit analysis of a particular model. He encourages network operators to add this type of value in their AI connectivity offerings, "like a nutrition label for how different models handle different things", as more users need to understand where and when to use various models.

Focus on security in AI metering

TM Forum's Lupo points out that telcos have an opportunity to implement security specifically for AI workloads, with much of the focus on content filtering. "That point of ingress and egress is your AI firewall. That AI firewall will have to know all about metering tokens. It will look at and understand the AI usage," he explains. Deep packet inspection is done at this stage, and telcos can share that data and reuse it for metering and building the experiences necessary to support AI connectivity offerings.
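Lupo's egress-point metering idea can be sketched as a counter keyed by customer and platform, updated where DPI already inspects the flow. The 4-characters-per-token rule echoes the rough token definition given earlier; the customer and platform names are illustrative.

```python
# Sketch of egress-point token metering at an "AI firewall": count tokens
# per (customer, platform) as traffic crosses the egress. The 4-chars-per-
# token estimate is a rough heuristic; all names are illustrative.
from collections import defaultdict

meter = defaultdict(int)   # (customer, platform) -> tokens observed

def estimate_tokens(payload: str) -> int:
    return max(1, len(payload) // 4)   # roughly 4 text characters per token

def observe(customer: str, platform: str, payload: str) -> None:
    # In practice this would sit where DPI already inspects the egress flow,
    # so the same inspection pass feeds both security filtering and metering.
    meter[(customer, platform)] += estimate_tokens(payload)

observe("acme", "cloud-ai-platform", "Summarize these 240 support tickets")
observe("acme", "cloud-ai-platform", "x" * 4000)
print(meter[("acme", "cloud-ai-platform")])
```

Because the counter is keyed per platform, the same data also feeds the "nutrition label" Rao describes: the customer can see which model endpoints are consuming tokens, and at what cost.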
"There will be an AI firewall egress point where you connect to Azure, Google, Llama," says Lupo, "and that's your Goldilocks zone to look at all the AI information" for purposes of metering and monetizing at any level in the AI stack.

Nvidia's Chris Penrose agrees, adding that telcos not only need to understand the cost and value of what they bring to a customer's solution, but also to "serve it up in a way that aligns with what the market is charging". The use of AI applications is scenario- and customer-specific, and connectivity needs to be part of a complete solution. "There is no one price out there. You have to explore the use cases and the alternatives and know the value proposition you are bringing is best in class," Penrose advises.

Give customers what they want

From a developer's perspective, telecoms "is always going to fill the gaps" because "my model likely needs access to the outside to get updated and solve new problems", says Ruben. He sees 5G coverage as key to solving this challenge for developers and AI users "because we are talking about streaming conversations. It's chattier data than IoT, with huge volume", especially as more industrial systems are connected and automated with AI.

Omdia's Washburn agrees that focusing on what customers want is critical and says CSPs must recognize that enterprise users have their preferred primary portals for doing their work, often provided by large independent software vendors. The enterprise requirement is to "be in my portal. I don't want to use your (the telecom operator's) portal", he explains. Most hyperscale cloud providers make their services accessible on other platforms, and telcos should follow suit.

tm forum open digital architecture

The TM Forum Open Digital Architecture (ODA) provides a migration path from legacy IT systems and processes to modular, cloud-native software orchestrated using AI. ODA comprises tools, code, knowledge and standards (machine-readable assets, not just documents). It is delivering business value for TM Forum members today, accelerating concept-to-cash, eliminating IT & network costs, and enhancing digital customer experience. Developed by TM Forum member organizations through our Collaboration Community and Catalyst proofs of concept, ODA is being used by leading service providers and software companies worldwide.

ODA includes:
- An architecture framework, common language, and design principles
- Open APIs exposing business services
- Standardized software components
- A reference implementation
- Guides to navigate digital transformation
- Tools to support the migration from legacy architecture to ODA
- Maturity models and readiness checks to baseline digital capabilities.

Goals of the Open Digital Architecture

The aim is to transform business agility (accelerating concept-to-cash), enable simpler IT solutions that are easier and cheaper to deploy, integrate and upgrade, and to establish a standardized software model and market which benefits all parties (service providers, their suppliers and systems integrators). See: TM Forum Open Digital Architecture, a blueprint for intelligent operations.

Learn more about collaboration

If you would like to learn more about the project or how to get involved in the TM Forum Collaboration Community, please contact George Glass.

tm forum research reports
[Gallery of TM Forum research report covers published between 2023 and 2025.]

meet the research & media team

Report Author: Ed Finegold, Contributing Analyst
Chief Analyst: Mark Newman, mnewman@tmforum.org
Practice Lead: Dean Ramsay, dramsay@tmforum.org
Head of Operations: Ali Groves, agroves@tmforum.org
Commercial Manager: Tim Edwards, tedwards@tmforum.org
Report Editor: Dawn Bushaus, Contributing Editor, dbushaus@tmforum.org
Managing Editor: Ian Kemp, ikemp@tmforum.org
Editor in Chief, Inform: Joanne Taaffe, jtaaffe@tmforum.org
Global Account Director: Carine Vandevelde, cvandevelde@tmforum.org
Customer Success Project Manager: Maureen Adong, madong@tmforum.org
Customer Success Project Manager: Amanda Alexander, aalexander@tmforum.org

© 2025. The entire contents of this publication are protected by copyright. All rights reserved. The Forum would like to thank the sponsors and advertisers who
have enabled the publication of this fully independently researched report. The views and opinions expressed by individual authors and contributors in this publication are provided in the writers' personal capacities and are their sole responsibility. Their publication does not imply that they represent the views or opinions of TM Forum and must neither be regarded as constituting advice on any matter whatsoever, nor be interpreted as such. The reproduction of advertisements and sponsored features in this publication does not in any way imply endorsement by TM Forum of products or services referred to therein.

Published by: TM Forum
European Office: 25 Worship St, London EC2A 2DX, United Kingdom
US Office: 181 New Road, Suite 304, Parsippany, NJ 07054, USA
Phone: +1 862-227-1648
ISBN: 978-1-955998-99-4
Report Design: Paul Martin

Find out more about TM Forum's AI and Data Innovation work and assets