ASSESSING ENTERPRISE GENERATIVE AI OPPORTUNITIES AND CHALLENGES

Reece Hayden, Senior Analyst

INTRODUCTION

ChatGPT's release in 2022 is often compared to the "iPhone or smartphone moment" for Artificial Intelligence (AI). This technology is now more accessible than ever for consumers and enterprises. Surrounded by a quickly developing Machine Learning (ML) ecosystem, foundation models have revolutionized the speed and cost of deploying generative AI in enterprise applications. This offers significant opportunities, including cost cutting, process automation, and even revenue creation through the development of new or augmented services. ABI Research expects that generative AI will add in excess of US$400 billion in value across enterprise verticals by 2030. However, although opportunities abound, enterprise deployment over the last two years has not accelerated as quickly as many expected. Demand- and
supply-side challenges are both to blame. This whitepaper explores and addresses these market frictions.

TABLE OF CONTENTS
INTRODUCTION
ENTERPRISE PERSPECTIVE
  Identifying Enterprise Generative AI Opportunity
  Adoption Challenges
  Assessing Technology Maturity for Enterprise Deployment
  Enterprise Deployment Strategies
  Expectations for Enterprise Generative AI
  State of Enterprise Generative AI Today
  Overcoming Enterprise Problems
  How Do Open, Small Models Stack up to Enterprise Priorities?
SUPPLY SIDE PERSPECTIVE
  Understanding the AI Supply Ecosystem
  Supply Side Challenges
  State of and Expectations for Generative AI Regulation
CONCLUSION

ENTERPRISE PERSPECTIVE

The generative AI market continues to move at breakneck speed, but enterprise adoption remains in an early stage. Enterprises are facing challenges around bringing "safe" use cases to market. The most troubling use cases are customer facing; the majority of scaled, production-ready use cases are still internally facing.

IDENTIFYING ENTERPRISE GENERATIVE AI OPPORTUNITY

Although enterprise adoption remains in an early stage, huge opportunities abound. Generative AI offers possibilities to cut costs, create new revenue streams, and automate existing processes. Figure 1 provides an overview of these opportunities.

Figure 1: Enterprise Generative AI Opportunities (Source: ABI Research)
[Figure 1 contrasts internal opportunities (supporting employee productivity with co-pilots, automating internal processes, understanding the customer lifecycle to improve retention, and lowering product lifecycle time to market and costs) with external opportunities (building new customer-facing products and services and streamlining and improving customer-facing processes, which requires reliable AI).]

ADOPTION CHALLENGES

A range of enterprises are deploying generative AI across a variety of different use cases, both internally and externally. However, the majority of the market is struggling to move from Proofs of Concept (PoCs) to production at scale. Underpinning this market friction are the risks that generative AI brings to enterprises. While executives are willing to develop PoCs to demonstrate the potential of the technology, they remain hesitant to undertake "real-world" deployments at a larger scale due to the technology's relative immaturity and the associated risks involved. The result has been "isolated" point deployments or long-life PoCs without a clear timeline for enterprise-wide scale. The following section highlights the range of risks and challenges associated with generative AI deployment at scale.

BUSINESS RISKS & CHALLENGES

Talent: Enterprises were caught off guard by the availability of new generative AI models. Few had planned for appropriate upskilling or hiring to accommodate deployment and usage. This means that enterprises are suffering from a significant skills gap that is hindering their ability to cost-effectively deploy, manage, and scale generative models. It is not just about training Foundation Models (FMs), as this will often be done by third parties; the skillset to integrate, optimize, fine-tune, monitor, and manage models is significant and can present a huge deployment barrier.

Cost: Training, fine-tuning, running, and managing generative AI models at scale is expensive. This is especially true when relying on cloud computing environments and running workloads across many Graphics Processing Units (GPUs) per day. Although generative AI has a clear Return on Investment (ROI), fixed and variable costs will be high, creating a significant barrier to development for enterprises.

AI Management at Scale: Industry commentators expect that as enterprise generative AI adoption scales, Small Language Models (SLMs) will be deployed across business units to support specific applications. This will mean enterprises are deploying tens, if not hundreds, of different AI models. Each model needs training, deployment, monitoring, optimization, fine-tuning, application debugging, data management, and a range of other ML processes. Without automation, this will be labor intensive and create significant management headaches.

Structure: Maximizing the value created through generative
AI adoption requires integration across every viable process within the enterprise. This involves significant Operational Change Management (OCM) to address processes, systems, and operational structures. As most enterprises were caught off guard by the "early" availability of generative AI, they are still going through restructuring to adapt to this technology. Expect transformation of existing processes, hiring policies, internal governance, upskilling, and more.

Control & Ownership: Internal enterprise regulation and governance requires greater control over Intellectual Property (IP) and customer data. This has hindered generative AI deployment, as third-party models often do not make it clear how user prompts or data are stored and utilized. This challenge extends to data sovereignty and the requirement to keep customer data within regional or national borders. A second challenge hinges on who owns AI output data. A third challenge comes from the risk of training Large Language Models (LLMs) with copyrighted data and the legal problems that continue to surface across the market.

Power Consumption: Even compared to "traditional AI," generative AI models require significantly more compute power for both training and inference, given their size. Increasingly, as enterprises scale generative AI inference, enterprise data center energy demand will create challenges, especially around sustainability.

Lack of Globally Harmonized Policies/Best Practices for Generative AI Deployment: Regulatory responses to generative AI's commercial readiness have been fragmented. Some regions, like the United States, are relying on self-regulation, while others, like the European Union (EU), are enforcing stricter regulation to mitigate potentially negative externalities associated with AI at scale. This will create regulatory risks for enterprise deployment, especially for multinational enterprises.

Geopolitical Tensions: Increasingly, AI hardware (and software) is playing a large role in ongoing geopolitical disputes. For example, the United States has banned shipments of high-performance chipsets to China. This creates instability for enterprises looking to develop and deploy AI across regions.

TECHNOLOGY CHALLENGES

Transparency and Explainability: Often referred to as "black boxes," closed-source LLMs do not let users see the underlying source code or weights. This means that end users cannot "explain" why certain prompts create certain outputs. This exposes the user to significant risks and means that, in the enterprise setting, developers cannot troubleshoot, alter, or change model weights to ensure accurate outputs.

Trustworthiness: Hallucinations are the primary risk in deployment. Numerous high-profile cases have shown the potential commercial and reputational problems that could result from incorrect answers being generated. These can result from bias, incorrect or insufficient training data, incorrect assumptions made by the model, or even end-user manipulation without appropriate AI guardrails. A real-world example of hallucination occurred when fabricated quotes and non-existent court cases were included in a ChatGPT-generated legal brief.

Reliability: Mission-critical use cases rely on low latency and high availability. However, as models scale, resources will need to perform more compute operations, which could create bottlenecks and availability challenges. For example, a public-facing chatbot may not be able to scale to handle queries from hundreds or thousands of clients at once. Availability challenges are underpinned by a scarcity of compute resources, especially given the supply chain challenges around GPUs.

Data: As the foundational element of generative AI deployment, data present numerous challenges. The first is the availability of curated datasets that can be utilized for training and fine-tuning. The second concerns data sovereignty, security, and IP, which are major challenges, especially when using third-party AI applications like ChatGPT; this has led to many high-profile enterprise bans of third-party systems. The third is ambiguity around the usage of third-party data for FMs. And the fourth centers around customer data and the understandable objections to using these data for model training.

Off-the-Shelf Models: Even industry-leading generative AI models offer lower than 70% accuracy in most cases. This means that although pre-trained models do speed up Time to Value (TTV), deploying generative AI is still time-, resource-, and talent-intensive, given the ML Operations (MLOps) needed to achieve "acceptable" accuracy. This has further created a bottleneck of talent within the industry, which is slowing deployment. Accuracy is just one metric used to benchmark generative AI models; on top of this, enterprises generally measure how models handle complex reasoning and questions (e.g., GLUE, SuperGLUE), and the accuracy of training data.
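A minimal sketch of the kind of accuracy scoring mentioned above; the metric (normalized exact match) and the prediction/reference pairs are illustrative, not ABI Research's or any benchmark's exact methodology:

```python
# Illustrative sketch: normalized exact-match accuracy, one of the simplest
# metrics an enterprise might compute over model outputs during benchmarking.

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so trivial formatting differences don't count as errors."""
    return " ".join(text.lower().split())

def exact_match_accuracy(predictions, references) -> float:
    """Fraction of predictions that exactly match their reference after normalization."""
    matches = sum(normalize(p) == normalize(r) for p, r in zip(predictions, references))
    return matches / len(references)

# Hypothetical model outputs vs. gold answers
preds = ["Paris", "42 ", "blue whale", "1989"]
golds = ["paris", "42", "Blue  Whale", "1988"]
print(exact_match_accuracy(preds, golds))  # → 0.75
```

Real suites like GLUE and SuperGLUE combine many such task-level scores; the point here is only that "accuracy" is a computed, task-dependent quantity, not an intrinsic model property.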
ASSESSING TECHNOLOGY MATURITY FOR ENTERPRISE DEPLOYMENT

Although LLMs bring significant capabilities, they are not ready for immediate deployment in enterprise applications. The reasons are numerous, especially in terms of hallucination, accuracy, performance, contextualization, and computational resource usage. All of these factors slow enterprise generative AI deployment. Figure 2 provides a breakdown of the structure through which enterprises take "pre-trained" LLMs and deploy them within the ML pipeline.

Figure 2: Process of Enterprise LLM Deployment (Source: ABI Research)
[Figure 2 pipeline: pre-trained LLM or LVM → fine-tune the model with enterprise data → optimize the model for hardware using compression techniques → test & experimentation → scaling AI in production. The iterative process can take six or more months, and the full cycle over a year.]

Within this process, enterprises face plenty of significant challenges.

Training Data: Data are often siloed across business units, making centralized model fine-tuning challenging and time consuming. Enterprises will need to go through a process of data restructuring to build effective data fabrics, which will be the foundational element of AI training and deployment. This is further complicated by the risks and challenges of using private customer data for model training. One of the major concerns for customers is that companies will leak their IP to competitors by utilizing their data for model training.

Optimization: Pre-trained foundation models are large and general; enterprise applications are narrow and can subsequently operate effectively with a fraction of the parameters. Optimization takes place to ensure that the LLM is efficiently deployed to maximize accuracy while reducing resource utilization. This process is not new; however, the LLMs used for generative AI are far more complex and intricate, making traditional techniques like quantization more challenging.
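The quantization technique named above can be sketched in a few lines; this is a minimal, symmetric post-training int8 scheme over made-up weights, not a production compression pipeline (which would use per-channel scales, calibration data, and more):

```python
# Minimal sketch of post-training symmetric int8 quantization: map float
# weights onto small integers with one scale factor, trading a little
# accuracy for a ~4x smaller representation.

def quantize_int8(weights):
    """Map float weights into [-127, 127] integers with a single scale factor."""
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize(q_weights, scale):
    """Recover approximate float weights from the int8 representation."""
    return [q * scale for q in q_weights]

weights = [0.8, -1.27, 0.003, 0.5]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
max_err = max(abs(w - r) for w, r in zip(weights, restored))
assert all(-127 <= v <= 127 for v in q)
assert max_err <= scale / 2 + 1e-12  # worst case is half a quantization step
```

The difficulty the text refers to is that LLM weight distributions contain outliers, so a single scale like this one wastes most of the integer range; that is what makes naive quantization harder for LLMs than for smaller networks.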
Fine-Tuning: This is the process of adapting a pre-trained LLM to specific tasks or knowledge. It requires developers to update parameters by retraining the foundation model on specific datasets. Specifically, the LLM is retrained using input and output pairs that represent desired behaviors. The goal is to increase the accuracy of outputs for specific subject matters or types of behavior. For a medical chatbot, for example, fine-tuning can optimize the outputs of a general pre-trained LLM for specific terminology and subject matter. Fine-tuning is time consuming, as it requires data preparation and supervised learning through which a developer rejects or accepts outputs to tune model responses. It is also costly, as it relies on GPUs to accelerate computing.
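The fine-tuning loop described above, parameter updates driven by task-specific input/output pairs, can be illustrated with a deliberately tiny stand-in model; a real LLM replaces this one-parameter function with billions of weights updated on GPUs, but the shape of the loop is the same:

```python
# Toy illustration of supervised fine-tuning: a "pre-trained" model's
# parameters are updated by gradient descent on input/output pairs that
# represent the desired task behavior.

def predict(w, x):
    return w * x  # stand-in for the model's forward pass

def mse(w, pairs):
    return sum((predict(w, x) - y) ** 2 for x, y in pairs) / len(pairs)

def fine_tune(w, pairs, lr=0.01, steps=200):
    """Update parameter w to reduce error on the task-specific pairs."""
    for _ in range(steps):
        # gradient of the mean squared error with respect to w
        grad = sum(2 * (predict(w, x) - y) * x for x, y in pairs) / len(pairs)
        w -= lr * grad
    return w

pretrained_w = 1.0                     # generic "pre-trained" behavior: y ≈ x
task_pairs = [(1, 3), (2, 6), (3, 9)]  # desired task behavior: y = 3x
tuned_w = fine_tune(pretrained_w, task_pairs)
assert mse(tuned_w, task_pairs) < mse(pretrained_w, task_pairs)
```

The cost drivers the text mentions show up here in miniature: every step touches every parameter and every training pair, which is why full fine-tuning of an LLM is GPU- and time-intensive.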
Gap between Model Developers and System Engineers: Another bottleneck in AI deployment is the gap between different processes. Often, AI developers build and test models without paying attention to the real-world deployment environment, which creates challenges when these models are actually brought into production. This lengthens the actual TTV for generative AI models in the market.

ENTERPRISE DEPLOYMENT STRATEGIES

Enterprise generative AI deployment will certainly take time, given the enterprise and technological challenges being faced. However, as enterprises assess their strategic approach to generative AI adoption, several different options are open. Table 1 explores four deployment strategies; as we move forward and the supply side builds out more "enterprise-ready" services, ABI Research expects more opportunities to emerge.

Table 1: Strategies for Enterprise Generative AI Adoption (Source: ABI Research)

Application Programming Interface (API) service (example: ChatGPT). Access a managed, third-party AI model through an API. Positives: one-click deployment; no management requirements; simple integration into applications through the API. Negatives: limited control over versioning and product; lack of transparency ("black box" without control over weights); limited control over data; security concerns for confidential company information.

Third-party managed service (example: System Integrators (SIs), consultants). A third party builds, deploys, and manages the AI model or application. Positives: requires no AI expertise; management and monitoring are handled externally. Negatives: limited day-to-day control or oversight; high cost for the service and compute resources/cloud.

In-house developed application. Leverage open-source or licensed models to build AI applications. Positives: complete control over the AI development process; control over data and deployment location. Negatives: high talent requirement and cost involved to acquire talent; very high TTV (1+ years).

Third-party inference platform or framework (examples: NVIDIA Inference Microservice, OctoStack, Intel AI Tiber Platform). Frameworks that enable developers to build and deploy optimized open-source or pre-trained models "anywhere." Positives: complete control over the deployment process; control over data; tools, software, and pre-optimized models available to support the process; low barriers to deployment. Negatives: reliant on a third-party framework; limited to certain software/tools and often to certain hardware types; high-cost compute resources/cloud.
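The API-service route in Table 1 reduces, on the enterprise side, to assembling a request and calling a managed endpoint. The sketch below shows that shape; the endpoint URL and model name are placeholders, the payload follows common chat-completion conventions rather than any one vendor's exact schema, and the network call itself requires a real service and key:

```python
# Sketch of the "API service" strategy: no model hosting or MLOps on the
# enterprise side — only request assembly and a call to a managed endpoint.
import json
import urllib.request

API_URL = "https://api.example.com/v1/chat/completions"  # placeholder endpoint

def build_payload(system_prompt: str, user_prompt: str, model: str = "example-model"):
    """Assemble a chat-completion-style request body."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    }

def call_api(payload: dict, api_key: str) -> dict:
    """POST the payload to the managed service (requires network access and a key)."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = build_payload("You are a market research assistant.",
                        "Summarize last quarter's churn drivers.")
```

The simplicity is the point, and so is the trade-off: everything outside `build_payload` (weights, versioning, data handling) sits behind the provider's API, which is exactly the control and transparency gap Table 1 flags.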
API services like ChatGPT have dominated early point deployments. These tools are being used horizontally across business groups for simple processes like market research or search. However, enterprises are increasingly moving beyond APIs to build applications using in-house expertise or third-party partners as they look to develop an effective long-term generative AI strategy. Third-party managed services will characterize the next wave of enterprise adoption, given the talent shortage and TTV considerations. In the medium to long term, this development will slowly shift in-house, with more enterprises leveraging third-party platforms to support AI development and application deployment.

EXPECTATIONS FOR ENTERPRISE GENERATIVE AI

Although opportunities abound, enterprise generative AI remains commercially nascent. Currently, the market is stuck in PoCs, with some internal employee augmentation. Although already creating value, ABI Research believes that internal use cases are just the beginning; the majority of value will be created through customer-facing use cases built on new products/services and process automation. Accessing these "high-risk" use cases requires both an enterprise strategic overhaul and significant supply-side innovation to ensure highly reliable results for these mission-critical use cases. Figure 3 provides an overview of the enterprise generative AI use case timeline.

Figure 3: Enterprise Generative AI Use Case Timeline (Source: ABI Research)
[Figure 3 charts the progression from employee augmentation (where the majority of the market is today, with limited oversight) through automation and optimization to high-risk, high-value new products and services, supported by tailored, open fine-tuned models: hundreds of models deployed from device to cloud to support individual use cases, owned and fine-tuned by the enterprise, with low TTV and high ROI.]

Of course, enterprise generative AI deployment is not a "one-size-fits-all" approach. Many enterprises already have in-house capabilities and strategies to support deployment, as well as in-house scalable resources that can support inference at scale. This means that certain enterprises will be faster to scale out generative AI services. Figure 4 provides a breakdown of different sized enterprises and how they will approach this opportunity.

Figure 4: How Enterprise Size Impacts Generative AI Adoption (Source: ABI Research)

Startups: Quick to embed applications/tools to build new products and support employee productivity. Strategy: quickly looking to deploy LLM-based applications from closed-source APIs to augment employee productivity and create new services. Approach: best suited to deploy applications based on closed-source APIs, but at risk of soaring costs. Risks: lack of resources to optimize; startups struggle to build "contextualized" models, given resource, capital, and human capital constraints.

Small and Medium Enterprises (SMEs): Some are exploring, but most encounter high cost, security, and skill barriers to entry. Strategy: leveraging low/no-code platforms (like Amazon Bedrock) to build applications, or engaging enterprise service providers. Approach: slow to deploy, as security concerns, cost, and operational complexity create significant barriers to entry; some are still in the exploratory and identification phase. Risks: poor utilization (too slow to adopt, given high barriers, losing competitive opportunity) and limited value extraction (the majority of usage will be restricted to "low-hanging" use cases like content generation).

Multinational Corporations: Partnering with mainstream vendors to deploy PoCs for tools/services. Strategy: partnering with "closed-source" vendors to build proprietary fine-tuned applications with enterprise data. Approach: focused on data privacy, so deployment will be on-premises, in a "walled garden," or in a Virtual Private Cloud (VPC); risk averse, with bans on API-based applications like ChatGPT. Risks: fragmentation (without clear corporate governance, isolated deployment instances could lead to long-term internal friction between departments) and vendor lock-in (early deployment with closed-source vendors could hinder future flexibility).

There are other considerations when it comes to understanding when and how enterprises will actually deploy generative AI:

Type of Industry: Integrating generative AI features in digital-native verticals will be simpler, given the skillset,
data availability, enterprise structure, processes, and mindset. However, building generative AI into traditional verticals like manufacturing and the supply chain will be more challenging. Early successes have been found by companies like Klarna, with unsiloed datasets and digital operations. In contrast, telcos have struggled to deploy effectively, given the legacy systems and datasets with which they are working.

Type of "Personas": As generative AI scales across industries and use cases, the AI experts available to support deployment will be in short supply. This will lead to different "personas" with varying expertise starting to build and deploy AI models. For example, Multinational Corporations (MNCs) likely have strong AI specialists who can enable deployment at scale, whereas it is highly unlikely that startups in the manufacturing world will have the same skillsets available. Different sized enterprises across different verticals will not have an equal distribution of talent, which will impact how and when they are able to seize the AI opportunity.

STATE OF ENTERPRISE GENERATIVE AI TODAY

Clearly, the enterprise generative AI market remains in an early stage. Chart 1 and Table 2 explore the value that ABI Research forecasts each enterprise vertical to add as a result of deploying generative AI. By the end of the forecast period, ABI Research expects that retail/e-commerce and marketing will be the biggest winners in generative AI deployment. Chart 1 demonstrates that relatively limited value has been created so far as a result of generative AI deployments.

Chart 1: Enterprise Value Creation from Generative AI, World Markets: 2023 to 2030 (Source: ABI Research)

This slow growth until 2027 is a result of a couple of factors:

Deployments Targeting Low-Value Use Cases: Given the risks involved with enterprise AI use cases, production-ready scaled applications have, so far, been constrained to low-value use cases with a high degree of human oversight. This is impeding potential value creation.

Technology Still Taking Time to Mature before Deployment at Scale: LLMs, even with fine-tuning, do not provide sufficient accuracy to be deployed in high-risk use cases.

Enterprises Still Developing Strategies and the Technical Foundation for Generative AI Adoption: ChatGPT's emergence shocked the industry, and enterprises were not ready with the talent or strategic processes to implement generative AI effectively at scale. Many companies are still building out internal structures and capabilities to enable effective generative AI use case adoption.

But generative AI use cases are being deployed today. Figure 5 provides an overview of certain "low-hanging" use cases. These have been deployed across numerous industries and are already creating value.

Table 2: Enterprise Value Creation from Generative AI, World Markets: 2024, 2027, and 2030 (Source: ABI Research)

Value added (US$ Millions): 2024 / 2027 / 2030
Retail & E-Commerce: 2,527 / 37,903 / 142,121
Marketing, Advertising & Creative: 17,172 / 56,811 / 108,261
Financial Services: 7,632 / 39,389 / 79,392
Energy, Utilities, and Mining: 3,420 / 6,152 / 30,274
Pharmaceuticals: 213 / 4,485 / 19,951
Law: 1,272 / 4,734 / 18,044
Entertainment & Multimedia: 2,312 / 6,678 / 14,414
Automotive: 1,328 / 4,848 / 13,437
Manufacturing: 581 / 1,793 / 6,746
Education: 241 / 585 / 1,119
Telecoms: 30 / 102 / 586
Healthcare: …

Figure 5: Low-Hanging Use Cases Being Deployed Today (Source: ABI Research)

Another factor to consider in enterprise generative AI deployment is location. AI training and inferencing have mostly been deployed in the public cloud, due to model size and economic advantages. However, as enterprise AI deployment strategies mature, many will consider moving toward a private cloud, or even closer to the enterprise with on-premises servers, the edge, or even on-device. Each of these locations has benefits and challenges:

Public Cloud: The traditional location to deploy AI, given resource requirements and beneficial commercial models. However, security, soaring cloud costs, risk of lock-in, uptime requirements, and capacity constraints are increasingly leading to fewer deployments, although the public cloud remains dominant.

Private Cloud: Provides dedicated resources, ensuring that data
privacy, regulatory, uptime, and customizability requirements are achieved for the enterprise. However, private clouds have much higher startup expenditure requirements and lack easy scalability, which may impact the ability of enterprises to serve growing numbers of customers.

OVERCOMING ENTERPRISE PROBLEMS

Enterprise challenges will not go away overnight, but emerging trends in generative AI will certainly help reduce risk and accelerate enterprise deployment. Critically, many of these developments will help build enterprise trust, which will accelerate deployments.

Emergence of SLMs: LLMs with hundreds of billions, if not trillions, of parameters may lead the market, but they also bring resource, cost, and latency challenges for enterprise deployment. SLMs may be more appropriate: they have a much lower parametric memory, offering a similar degree of accuracy for specific tasks with a much more competitive economic proposition for model inference.

[Figure 5 content]
Co-Pilot. Description: an assistant that chats in natural language and can perform various tasks autonomously, such as email generation or summarization. Example: Microsoft Copilot. Industries deployed: cross-vertical.
Chatbot. Description: deployed internally or externally, chatbots are used to answer questions in natural language. Example: IBM Watson Assistant. Industries deployed: cross-vertical.
Content Generation. Description: tools to support generating new visual content for advertisements or films. Example: OpenAI Sora. Industries deployed: media & entertainment, marketing.

Data and AI Platform Convergence: Siloed and unstructured datasets continue to hold enterprise generative AI back. Data platforms like Databricks and Snowflake are critical industry
players that can support effective enterprise data strategies. Data platforms are incorporating AI capabilities through Research and Development (R&D) and, more importantly, third-party partnerships. This development will help make enterprise data AI-ready, lowering technical barriers to deployment.

Open-Source Foundation Models with Competitive Performance: Open-source FMs continue to dominate R&D headlines, with players like Mistral, Meta, Deci, and others showcasing market-leading benchmarks. Increasingly, open-source FM R&D coupled with MLOps innovation will spur growth in enterprise generative AI adoption, as these models offer a much better long-term commercial and technical profile.

Centralized Platforms to Support Company Governance/Policy/Process: As regulatory conditions become more fragmented, with significant regional variation, enterprises operating across multiple regions will require platforms to ensure that company generative AI deployments align with the legal restrictions and requirements in each area. These regulations may control which models can be used, how customer and enterprise data are utilized and stored, and the power consumption of AI models.

Synthetic Data: Synthetic data are generated by AI models trained on real-world data samples. This technique enables enterprises to utilize a small amount of real-world data to create extensive structured datasets for model training or fine-tuning, helping Independent Software Vendors (ISVs) or enterprises avoid using sensitive internal, customer, or public data. However, synthetic data are not a perfect substitute, as they may not reflect real-world nuances within the data.

Low/No-Code & Visual AI Platforms: Given that different personas are expected to deploy generative AI, from business analysts to software engineers, software development and ML skills will increasingly be one of the key bottlenecks for AI deployment. Low/no-code platforms that include visual AI tools will be vital in reducing the coding and other hard skills necessary to support generative AI deployment.

Deep Model Optimization Platforms: Cost, performance, and accuracy are some of the key challenges constraining enterprise deployment. Removing these bottlenecks from ML processes requires simple, easy-to-use optimization tools that help developers quickly deploy new, highly efficient models. Traditional optimization techniques
100、 like quantization and sparsification have struggled to reduce LLM size effectively,while managing to maintain performance,given the intricacy of LLMs.New optimization tools like Neural Architectural Search(NAS)are being developed to support the creation of efficient Deep Neural Networks(DNNs).NAS i
101、s the automated search for DNNs with hardware,application,and data awareness.Although NAS is commercially immature,as it remains time consuming and costly,the resultant network provides much better performance than similar“off-the-shelf”models.Increasingly,expect to see NAS being integrated across s
102、olutions.Software vendors like Deci and ETA Compute are using it commercially,while Qualcomm is building an optimized model zoo for its cloud processor with NASASSESSING ENTERPRISE GENERATIVE AI OPPORTUNITIES AND CHALLENGES Guardrails:Quickly maturing in the generative AI space,guardrails are being
103、deployed to reduce risk in enterprise applications.Crucial to governance and control,guardrails can help reduce bias,eliminate hallucinations,and ensure that human oversight is retained on specific applications.For example,instead of enabling a chatbot to answer every question from customers,specifi
104、c questions will lead to responses from humans.This can mitigate a few of the risks around hallucinations.Retrieval Augmented Generation(RAG):RAG is a technique for enhancing the accuracy and reliability of generative AI models with facts fetched from external sources.This has already been widely de
Retrieval Augmented Generation (RAG): RAG is a technique for enhancing the accuracy and reliability of generative AI models with facts fetched from external sources. It has already been widely deployed in industry-centered applications. RAG also reduces reliance on "trained memory," supporting SLMs with external databases. However, RAG has yet to show consistently high accuracy, with models not breaking 80%. This has led companies like NVIDIA to complement it with tools like "re-ranking," which prioritizes retrieved information to improve accuracy.
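A minimal sketch of retrieval plus a second re-ranking pass makes the idea concrete. The scoring below is naive word overlap and the "longest query term" heuristic is invented purely for illustration; real systems use embedding-based retrieval and trained re-rankers:

```python
# Toy RAG retrieval with a second re-ranking pass, in the spirit of the
# "re-ranking" step described above. Documents and queries are invented.
DOCS = [
    "The warranty covers manufacturing defects for two years.",
    "Shipping within the EU takes three to five business days.",
    "Returns are accepted within 30 days with a valid receipt.",
]

def overlap(a: str, b: str) -> float:
    """Naive relevance score: Jaccard overlap of lowercased words."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def retrieve(query: str, k: int = 2):
    """First pass: take the k documents with the highest overlap score."""
    return sorted(DOCS, key=lambda d: overlap(query, d), reverse=True)[:k]

def rerank(query: str, candidates):
    """Second pass: prefer candidates containing the query's longest word,
    a stand-in for the learned relevance model a real re-ranker would use."""
    key_term = max(query.lower().split(), key=len)
    return sorted(candidates, key=lambda d: key_term not in d.lower())

query = "how long is the warranty"
top = rerank(query, retrieve(query))
print(top[0])  # the warranty document; a generator would condition its answer on it
```

Separating retrieval from re-ranking is what lets vendors bolt a quality pass onto an existing RAG pipeline without retraining the generator.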
HOW DO OPEN, SMALL MODELS STACK UP TO ENTERPRISE PRIORITIES?

As the enterprise market matures, large, closed models like OpenAI's GPT-3.5 will not always align with the Key Performance Indicators (KPIs) required for deployment, given performance, transparency, cost, and other considerations. Increasingly, enterprises will explore alternatives to align generative AI deployment with application requirements.

WILL MODEL SIZE MAKE A SIGNIFICANT IMPACT ON ENTERPRISES?

R&D in generative AI originally focused on building bigger and more accurate models; however, as enterprises look to deploy, other considerations are coming into play, which is leading to significant investment in open-source, small models, or SLMs. Notably, Meta, Mistral, and others have released models here. Figure 6 provides an overview of investment in these types of models over the last year.

Figure 6: R&D Investment in Large and Small Generative AI Models (Source: ABI Research)

Large models:
- March 2023: OpenAI releases GPT-4, with roughly 1.8 trillion parameters
- July 2023: Anthropic releases Claude 2 via an API, estimated to be over 130 billion parameters
- February 2024: Mistral Large is released in partnership with Azure; the model can only be accessed via an API
- February 2024: SambaNova releases Samba-1, a 1 trillion parameter Composition of Experts (COE) model
- May 2024: OpenAI releases GPT-4 Omni with multi-modal capabilities

Small models:
- February 2023: Meta releases LLaMA across 65, 33, 13, and 7 billion parameter variants
- March 2023: Baidu releases Ernie Bot, based on models with parameters ranging from 430 million to 1.75 trillion
- May 2023: Technology Innovation Institute releases the open-source Falcon 40B
- June 2023: Microsoft releases the open-source Orca 13B, fine-tuned from LLaMA
- July 2023: Meta releases LLaMA 2 across 70, 13, and 7 billion parameter variants
- September 2023: Mistral releases its 7B model, with performance comparable to GPT-3.5
- October 2023: Microsoft releases Orca 2 at 13B, fine-tuned from LLaMA 2, with performance comparable to GPT-3.5
- November 2023: 01.AI releases the latest version of Yi with 6B and 34B parameters

But how do these different models stack up against
different enterprise KPIs?

Table 3: Assessing Different Generative AI Models across Enterprise KPIs (Source: ABI Research)

- Use: Large models (over 15 billion parameters) are general models that can support a variety of applications, e.g., a generalized chatbot or co-pilot. Small models (under 15 billion parameters) support specific applications with a narrow focus, e.g., text classification.
- Accuracy: Large models have shown significantly higher accuracy across various tasks. Small models show lower accuracy, but this can be improved through fine-tuning and similar techniques.
- Performance: Large models often show higher latency; small models show lower latency.
- Cost: Large models carry high costs to train, fine-tune, and run inference; small model costs are much lower.
- Transparency: Large models offer limited transparency, as they are often deployed through an API. Small models offer higher transparency through open-sourcing.
- Deployment: Large models are constrained to the cloud due to memory and energy consumption. Small models can be deployed across locations from cloud to edge.

The current thinking is that both large and small models are necessary, and that the use case or required outcome will determine the size of the model deployed. Large models offer the accuracy guarantees that many enterprises are looking for in deployment; however, the model economics do not work for certain use cases. Subsequently, for narrow applications like text classification, small models will be more widely deployed.

SMALL LANGUAGE MODELS WILL OPEN UP EDGE AND ON-DEVICE GENERATIVE AI

SLMs will also enable deployment of generative AI across the distributed continuum. LLMs like GPT-3.5 have huge memory and resource requirements for inference. However, SLMs with fewer than 15 billion parameters can be deployed in more resource-constrained domains like edge servers, smartphones, cars, or Personal Computers (PCs). This is creating sustained growth in generative AI inference outside of the data center. Moving AI inference out of the cloud/data center to the edge has clear benefits for enterprises. Costs can be optimized by reducing reliance on cloud and networking; reliability can be improved by reducing reliance on a network for data transfer; and data privacy can be strengthened by minimizing the amount of user data moved to the cloud. Subsequently, a multitude of use cases are available across verticals:

- Manufacturing: AI-generated workflow instructions and chatbots; remote assistance for repairs; staff training; conversational interfaces with machines.
- Healthcare: Conversational interfaces with patient monitoring systems; remote diagnostics; generated patient chatbots; history summarization.
- Professional Services: Personal co-pilots and digital assistants.
- Logistics & Transportation: Intelligent route mapping; supply chain tracking; driver/vehicle natural language chatbots.
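A rough, back-of-envelope calculation shows why the sub-15-billion-parameter threshold matters for these environments. Weight memory alone scales with parameter count times bytes per parameter; the model sizes and precisions below are illustrative:

```python
# Back-of-envelope weight memory for inference: parameters x bytes per parameter.
# Ignores the KV cache and activations, which add further overhead.
def weight_memory_gib(params_billions: float, bytes_per_param: float) -> float:
    return params_billions * 1e9 * bytes_per_param / 1024**3

# A 7B SLM in fp16 (~13 GiB) already strains a high-end PC GPU, but the same
# model quantized to int4 (~3.3 GiB) fits on-device; a 70B LLM in fp16
# (~130 GiB) remains a multi-accelerator, data-center proposition.
for name, params in [("7B SLM", 7), ("70B LLM", 70)]:
    for precision, width in [("fp16", 2), ("int4", 0.5)]:
        print(f"{name} {precision}: {weight_memory_gib(params, width):.1f} GiB")
```

This is also where the quantization and optimization tooling discussed earlier compounds with model size: a small model plus aggressive quantization is what actually unlocks smartphone and PC deployment.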
IMPACT OF OPEN AND CLOSED MODELS FOR ENTERPRISES

One of the key choices that enterprises need to make when deploying generative AI applications is open versus closed source. Open-source models, which make the weights, source code, and usage license freely available, have shown rapidly improving performance and are increasingly competitive with closed-source models. In addition, open-source models benefit from a much faster pace of innovation, as anyone can make changes and improvements to the underlying source code. Table 4 provides an assessment of open- and closed-source models from the enterprise perspective.

Table 4: Evaluating Open- and Closed-Source Models (Source: ABI Research)

- Open-source opportunities: Enterprises can customize/fine-tune models utilizing their own data. External innovation can support improved performance. Lower inference cost. Low vendor lock-in. More scalable for enterprise use cases. The ecosystem is rapidly expanding. Emerging open-source security frameworks can be embedded alongside models/applications. Models are deployed in the enterprise's own environment, reducing data risk.
- Open-source challenges: Requires in-house development expertise or third-party support, which can be prohibitive for early-stage or Small and Medium Enterprises (SMEs). High deployment cost. Often requires on-premises servers or infrastructure to run models. Heavily reliant on fine-tuning for optimized performance. No third-party support or centralized governance/regulation.
- Closed-source opportunities: Market-leading performance. Security frameworks embedded. Low deployment cost. Ease of access without any internal skills needed. APIs can be consumed in a flexible model, ensuring enterprises of all sizes can access them. Low Time to Value (TTV) and upfront investment cost. Support from third-party customer service.
- Closed-source challenges: Risk of vendor lock-in. High inference cost due to API and egress fees. Lacks explainability, transparency, and observability. Models can be changed or altered without the user's permission, which may contribute to worse performance. Data may be at risk when stored in third-party data centers. Lack of control over where a model is hosted.

As the enterprise market remains immature, the majority of enterprises will still rely on closed-source models. These offer better economics during Proofs of Concept (PoCs), better TTV, and lower commercial risk. However, as the market develops and enterprises begin to develop a more nuanced approach supported by talent acquisition, governance, and third-party partnerships, expect an increasing number of enterprises to blend open- and closed-source models depending on the use case and a variety of other considerations.

SUPPLY SIDE PERSPECTIVE

UNDERSTANDING THE AI SUPPLY ECOSYSTEM

The supply chain is huge and expanding every day. Figure 7
provides a snapshot of the companies operating in each link.

Figure 7: Companies in the Generative AI Supply Chain (Source: ABI Research). The links shown are: AI chip vendors, foundation model developers, MLOps providers, application developers, enterprise services, cloud services, and Original Equipment Manufacturers (OEMs).

ORIGINAL EQUIPMENT MANUFACTURERS

Increasingly, OEMs are moving upward through the value chain to support AI monetization at the software layer. Companies like Dell are adding to their hardware (e.g., servers and PCs) to provide enterprise-ready generative AI tools (e.g., Dell APEX). These platforms are aimed at enabling enterprise developers to build applications/services. However, the key business model for these OEMs within the enterprise market will still be hardware, whether that is servers for cloud or data centers, or devices for enterprise customers.

AI CHIP VENDORS

The AI chip market is clearly dominated by NVIDIA, as its market-leading accelerators and CUDA framework support the vast majority of training workloads for generative AI models. However, the inference market is far more competitive, with a range of incumbents (i.e., Qualcomm, AMD, and Intel) and challengers (developing solutions with the RISC-V architecture). Increasingly, the on-device AI market with productivity-focused applications is taking center stage, driven by demand for secure generative AI applications closer to the end user. Although chipset innovation continues, the market is seeing a slowdown, meaning that competition is shifting to the software layer. Chip vendors will look to invest heavily in their software capabilities (e.g., optimization tools, Software Development Kits (SDKs), and application developer environments), as this will help drive competitive differentiation in a fragmented AI inference chip market.

CLOUD SERVICES

Traditionally focused on providing compute services within cloud environments, these stakeholders are increasingly looking to move upward to build AI products and tools. Investing heavily in AI R&D and commercial productization does bring an element of monetization, but the primary
focus is to increase traffic within their cloud environments.

FOUNDATION MODEL DEVELOPERS

Since the public release of ChatGPT in 2022, an enormous number of competing closed- and open-source models have been released, contributing to high growth in this market. However, new entrants will struggle to enter the market, given the high barriers (hardware costs, training expertise, and data access). Innovation continues to target both leading-edge LLMs with improved accuracy and capabilities, and trailing-edge, more commercially viable language models. The key challenge for these vendors is developing an effective commercial strategy. R&D and running costs for FMs are exceedingly high and currently rely on third-party capital injections. Expect stakeholders to build plenty of partnerships across the supply chain to start creating effective channels to market.

MLOPS PROVIDERS

This is the most active node within the supply chain; plenty of different tools and services exist to support ML development, deployment, operations, and model management in the enterprise domain:

- Data Services: Tools that enable data scientists to build, monitor, and manage the datasets used in ML processes like training or fine-tuning. These platforms are the core component of any AI process.
- Optimization: Improving the performance or efficiency of generative AI models/applications through techniques like quantization, sparsification, pruning, and fine-tuning.
- Integration: Support for integrating LLM-based applications and services into enterprise processes.
- AI Security Services: Development or deployment platforms that enable the implementation of content safety, guardrails, and other security services to ensure that generative AI solutions do not compromise data privacy, security, or IP.
- Regulation: The ability to embed rules into generative AI models.
- Evaluation: Assessing and comparing generative AI model performance in development and production.
- Monitoring: Platforms that provide insight into the performance of models across deployments.

Often, ML platforms enable developers to perform
multiple MLOps tasks across different AI frameworks (computer vision, graph-based, and generative AI). Increasingly, the market will see some consolidation, with key players acquiring numerous Large Language Model Operations (LLMOps) capabilities to support multiple different operations. Moreover, as generative AI scales, more personas without AI expertise will be required to deploy and monitor models, which will create long-term demand for tools offering visual, low-code, and no-code capabilities.

APPLICATION DEVELOPERS

The LLM-based application ecosystem continues to expand, with three broad categories emerging: 1) User Interfaces (UIs) for FMs (e.g., ChatGPT); 2) application plug-ins based on public LLMs, which simply access ChatGPT or similar generic chatbots through APIs and deliver an application Over-the-Top (OTT); and 3) full-stack applications built on fine-tuned LLMs, which take pre-trained LLMs and refine them using use case- or enterprise-specific datasets. Heavy Business-to-Business (B2B) enterprise applicability exists, even for mission-critical applications. Application types 1 and 2 have already become concentrated, given the limited skill required for deployment; however, type 3 (which will unlock B2B value) requires more time and effort. Expect more activity from type 3 over the next six months, as application developers look to use increasingly competitive open-source models to build fine-tuned, full-stack applications.

ENTERPRISE SERVICES

Given the pervading skills gap, enterprises
will look to outsource the deployment of AI to third parties to speed TTV. Increasingly, business consultants and System Integrators (SIs) are developing end-to-end enterprise services. These services leverage partnerships with "AI experts."

Table 5: Enterprise Service Provider Partnerships (Source: ABI Research)

- Business consultants: Bain & OpenAI; BCG & Intel; PwC & Harvey AI; Deloitte & Google Cloud, NVIDIA, and AWS Bedrock; KPMG & Microsoft and Google Cloud; Accenture & Scale AI.
- System integrators: Tata Consultancy Services & Google Cloud; Wipro & Google Cloud; Cognizant & Google Cloud; Capgemini & Google Cloud.

One can see why there is a huge amount of activity in this node of the supply chain: enterprises lack the skills, internal processes, and strategic understanding to effectively deploy generative AI across business processes, while consultants and integrators can build on their core competencies in Organizational Change Management (OCM), process transformation, training, and technology integration to capitalize on this lucrative opportunity. Vendors within the supply chain are also leveraging core AI competencies to build enterprise services; for example, Deci (a company supporting deep model optimization) is working with enterprises to optimize their models for specific applications and deployment environments.

SUPPLY SIDE CHALLENGES

As the enterprise generative AI market continues to develop, supply side stakeholders have opportunities to capture the growing market. However, vendors face significant challenges:

- Rising Costs with Few Mature Revenue Streams and Decreasing Funding: Consumer and enterprise usage of generative AI applications like ChatGPT has increased quickly over the last two years; however, the majority of users remain on free tiers. Running inference on LLMs requires heavy processing capabilities with access to scalable resources, quickly increasing variable costs, especially due to cloud reliance. As the enterprise market is nascent, vendors have yet to build mature revenue streams to balance out these costs. Many are turning to partnerships with SIs or hyperscalers to build channels to the enterprise market, but this remains in the preliminary stages and will take time to mature. For now, AI leaders are highly reliant on internal subsidization or external capital funding from investors.
- Fragmented Supply Side and Growing Competition across Each Node: Players across the compute sector, from OEMs to Independent Software Vendors (ISVs) and most players in between, are building AI or partnering with leaders to develop enterprise-ready commercial solutions. This is because the availability of FMs has lowered supply side barriers, and it will create significant competition on the supply side, with short- to mid-term fragmentation.
- Growing Number of Legal Challenges: High-profile suits have been levied against AI leaders. Getty Images accused Stability AI of copyright infringement, while The New York Times recently sued OpenAI and Microsoft over the use of copyrighted work for training LLMs. These legal challenges will create additional risk and expense throughout the R&D process. Furthermore, the question of which party is liable for model output remains undecided. Model developers like OpenAI have been blamed for biased outputs from chatbots. This could contribute to significant liability challenges down the road.
- Open-Source and Closed-Source Debate: As the supply side looks to blend R&D and commercialization, vendors are struggling to decide whether an open- or closed-source strategy makes sense. Meta is potentially the biggest proponent of open source, while the remainder of the market seems split between contributing to the development of open source and building closed-source models with the intention of driving commercialization. A prominent example is Mistral, which has open-sourced the weights for its Mistral 7B model while keeping Mistral Large and others closed. Meshing these two approaches will be difficult moving forward.

STATE OF AND EXPECTATIONS FOR GENERATIVE AI REGULATION

Like enterprises, governments and regulatory bodies were caught off guard by the emergence of generative AI. This has resulted in governments playing catch-up, trying to define their regulatory approaches as the generative AI landscape quickly evolves.

WHERE SHOULD REGULATION FOCUS?

From ABI Research's perspective, there are key areas on which AI regulation should focus.

Figure 8: Where Should Regulation Focus? (Source: ABI Research)

Regulation, if built effectively, can be used to spur innovation targeting
"responsible AI" deployment. This will be hugely beneficial for the enterprise market, as it will drive transparency, security, and effective deployment. This means targeting regulation at both the supply and demand sides. Supply side regulations should focus on ensuring that vendors are using data appropriately to develop transparent models, applications, and services. Examples of potential supply side regulations are:

- Watermarks for AI-Generated Content: Watermarks embedded within AI-generated content, enabling content consumers to receive greater transparency around when AI has been used. This helps with identification, traceability, copyright enforcement, misinformation identification, and integrity verification.
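Integrity verification, at least, is straightforward to sketch. The example below attaches a keyed tag to generated text so a consumer can detect tampering. It is a simplified illustration using an HMAC with an invented key and strings, not a statistical token-level watermark or a content-provenance manifest, which are what real schemes in this space involve:

```python
import hashlib
import hmac

SECRET = b"provider-signing-key"  # illustrative key held by the model provider

def tag(text: str) -> str:
    """Produce a keyed integrity tag for a piece of generated content."""
    return hmac.new(SECRET, text.encode(), hashlib.sha256).hexdigest()

def verify(text: str, received_tag: str) -> bool:
    """Check that the content matches the tag issued at generation time."""
    return hmac.compare_digest(tag(text), received_tag)

generated = "This paragraph was produced by a generative model."
t = tag(generated)
print(verify(generated, t))        # True: content is unmodified
print(verify(generated + "!", t))  # False: content was altered after tagging
```

Unlike a true watermark, this tag travels alongside the text rather than inside it, but it conveys the same regulatory intent: a verifiable claim about the origin and integrity of AI-generated content.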
- Prompt Data Storage and Usage Guidelines/Transparency: As AI is more widely used across enterprises, regulation should be introduced to determine how the supply side can utilize and store prompts and personal information from end users. Ensuring that data storage is transparent will help scale adoption. This should include informing end users of how prompt data are utilized, transparent data retention policies, data storage policies, data anonymization, third-party data sharing, and data usage reports, among other regulatory considerations.
- Regulatory Oversight of Model Training Data: Data play a critical role in the development of effective AI models. Regulators could play a role in ensuring that data are accurate and lack bias.
- Model and Weight Transparency for Regulators: Making the workings, decision-making processes, and limitations of AI models clear and understandable to users, stakeholders, and regulators. This is crucial for building trust, ensuring accountability, and enabling ethical use of AI technologies.

Increasingly, LLM standards testing frameworks are being explored by both private and public stakeholders. These frameworks can be used to reassure enterprises about how LLMs understand prompts and create outputs. This helps determine the level of risk enterprises are exposing themselves to by understanding accuracy, performance, training data, hallucination, bias, and data leakage, among other factors.

As summarized in Figure 8, regulation can target three layers:

- Model: Transparency (regulatory review of data usage, structure, weights, and others); Bias (assessment of model output accuracy and fairness); Watermarks & Citations (tracking AI-generated content); Registration (central control of any AI models subject to tests).
- Data: Sovereignty (alignment with data sovereignty requirements by keeping user data and prompts local); Customer Data (transparency around usage of customer data, with controls enabling users to opt out of data training); Intellectual Property (clear guidance around which data can and cannot be legally used for model training).
- Application: Energy Usage (limits imposed on the energy that can be consumed to run AI training and inference); Human Oversight (a risk-based framework that ensures human oversight for certain applications); Use Cases (specific regulation targeting vertical use case adoption of generative AI); Labor Replacement (workforce replacement risk from generative AI).
A couple of open standards frameworks for transparency and observability have emerged and are likely to have a significant impact on enterprise LLM adoption:

- Infocomm Media Development Authority's (IMDA) AI Verify: Singapore has implemented an AI governance testing framework and software toolkit in consultation with private stakeholders. Run within the enterprise environment, this is a single integrated toolkit that enables enterprises to conduct technical tests on their AI models; however, it cannot yet evaluate LLMs. The toolkit provides a framework that is consistent with internationally recognized AI governance principles.
- WhyLabs LangKit: An open-source text metrics toolkit for monitoring and regulating language models. It is an observability and safety platform that enables detection of risks and safety issues in both open-source and proprietary LLMs (including toxic language, sensitive data leakage, and hallucinations). This can help enterprises and users understand the risks involved in LLM deployment by understanding why the LLM creates particular continuations.
- U.K. and U.S. Governments: Entered into an AI safety agreement to work together on developing methods for evaluating the safety of AI models and systems.

As regulation is imposed, standard testing frameworks and toolkits will be a necessary component to support enterprise AI deployment at scale. Without these simple test frameworks, enterprises will struggle to understand the legal and ethical risks of generative AI deployment. However, these frameworks are currently in an early stage. Moving forward, an international framework to assess LLMs should be the goal.

REGIONAL APPROACHES

Since ChatGPT's release, regional governments have scrambled to develop an effective regulatory response to this emerging technology, and numerous different approaches have been followed. Ranging from pro-innovation policies with limited regulation to strict rules or risk-based approaches, it remains unclear how best to address the inherent risks of the technology. Figure 9 provides a breakdown of different regional approaches to generative AI.

Figure 9: Different Regional Regulatory Approaches to AI (Source: ABI Research). The figure places regions on a spectrum from minimal to restrictive rules:

- China (restrictive rules): Balancing social control and innovation support. It has enforced guidelines, including security reviews, registration, generated content labels, data disclosure, and model transparency. Any model must be centrally approved prior to public release.
- United Kingdom: Building a contextual, sector-based regulatory framework with limited direct governance. Reliant on sector self-regulation to create a pro-innovation environment.
- Japan (minimal rules): No direct regulation imposed; limited soft rules aiming to maximize the societal and economic benefits of AI.
- EU: The EU AI Act provides a risk-based regulatory framework that aims to give developers and deployers clear requirements and obligations regarding usage of AI. This framework splits regulation between minimal, limited, high, and unacceptable risk.
- United States: Remains reliant on self-governance; however, there is some interest in central controls covering safety tests, trustworthy standards, risk mitigation, partnered innovation, and technical biases (among other issues), and the country is collaborating on AI safety frameworks.
AI regulations will have a significant impact on enterprise generative AI. The United States pushes for self-regulation, which is likely to have a positive impact on technological innovation, pushing the United States further to the forefront of the generative AI market. However, given the adoption risks, this approach could slow enterprise adoption, as challenges around trustworthiness are not centrally addressed through AI regulation or governance. China relies on government control, which, as many expect, will have a mixed impact on innovation: on one hand, it will limit open R&D, while on the other, it will provide funding to spur cutting-edge innovation for specific companies. Europe looks to protect end users with regulations, which will significantly hinder R&D for new AI systems, especially cutting-edge FM innovation, but may have a positive impact on enterprise adoption. The EU AI Act has already been widely criticized as "anti-innovation." Building an effective regulatory approach for AI brings its own challenges, outlined below:

- Balancing Risk and Reward: Generative AI brings substantial economic and technical risks, but also commercial opportunities. Developing a regulatory approach that balances these effectively is challenging, as strict regulations could stifle innovation, while a more laissez-faire approach could create unnecessary risks. ABI Research expects that AI regulation will have a significant impact on geo-economic growth moving forward, similar to the impact of computing.
- Building Cooperation and Alignment: Although there are consortiums and bodies targeting international cooperation, national self-interest is driving toward localized regulation. Notable exceptions are the AI Safety Institute Consortium (AISIC) and the EDSAFE AI Alliance. Building international cooperation around meaningful AI policy is necessary to mitigate the geopolitical risks associated with AI development. Recently, the United Kingdom entered into an agreement with the United States to work together on developing methods to evaluate the safety of AI tools and systems. This is a first bilateral agreement, but it is unlikely to lead to direct regulation.
- Speed of Innovation: Generative AI is shifting from R&D to commercial deployments across use cases and end markets. FMs have developed quickly, with new multi-modal models capable of generating content across modalities. Keeping pace with this
innovation is challenging, and implementing flexible, rules-based policies that ensure alignment with safety standards without inhibiting innovation will be difficult.

CONCLUSION

Enterprise generative AI remains in an early stage, but there are plenty of opportunities across different verticals. Enterprises are already implementing solutions that augment employees on a day-to-day basis; however, this is just the beginning. More opportunities exist across different verticals to create new revenue streams, cut costs, and automate various processes, but enterprises are struggling to access them due to significant barriers on both the demand and supply sides. The primary challenge for enterprises is risk exposure; generative AI applications bring substantial risks around cost, bias, talent, data security, energy consumption, hallucination, and availability, which contribute to long-lived internal PoCs and slow the transition to scaled deployments. Reducing or limiting enterprise generative AI risk must be the priority for both the demand and supply sides as the market looks to accelerate. Regulation has a key role to play, as it can limit some of the inherent risks involved in generative AI deployment. Stakeholders must work cooperatively to build international consensus and deploy fair, effective, pro-innovation regulatory frameworks that encourage enterprise generative AI deployment.

Published June 2024
157 Columbus Avenue, 4th Floor, New York, NY 10023
+1.516.624.2500

We Empower Technology Innovation and Strategic Implementation. ABI Research is uniquely positioned at the intersection of end-market companies and technology solution providers, serving as the bridge that seamlessly connects these two segments by driving successful technology implementations and delivering strategies that are proven to attract and retain customers.

© 2024 ABI Research. Used by permission. ABI Research is an independent producer of market analysis and insight, and this ABI Research product is the result of objective research by ABI Research staff at the time of data collection. The opinions of ABI Research or its analysts on any subject are continually revised based on the most current data.