"Leveraging small AI and machine learning models in Ibexa and Symfony" (slide deck, 28 pages)
Machine Learning Inference in PHP by example
Leveraging FFI, ONNX and Transformers

Who is familiar with Transformers?

What is a Transformer?
- A neural network architecture that revolutionized NLP and beyond
- Introduced in the 2017 paper "Attention Is All You Need"
- Powers models like GPT, BERT and T5
- A way of handling a specific task on top of a model, more efficiently

Key innovations:
- Processes all input at once instead of sequentially
- Encoder-decoder structure
- Self-attention layers
- Feed-forward neural networks
- Position embeddings

Before:
- Sequential processing (RNNs, LSTMs)
- Limited context window
- Slow training and inference
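As an aside, the self-attention layers listed among the key innovations compute scaled dot-product attention. In the notation of "Attention Is All You Need":

```latex
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V
```

Here $Q$, $K$ and $V$ are the query, key and value matrices and $d_k$ is the key dimension; because the softmax is taken over all positions at once, the whole input can be processed in parallel rather than token by token.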
After (2017):
- Parallel processing, lowering the cost of training and inference
- Extended context understanding
- Faster training and inference
- Better at capturing relationships
- Reduced resource usage

Demo: https://huggingface.co/spaces/webml-community/attention-visualization
Learn more: https:/

Let's implement 3 use-cases in PHP:
1. Text classification
   Demo: php bin/console app:score B07VGRJDFY
2. Image classification
   Demo: https:/
3. Text generation
   Demo: https:/

Time to explain the magic.

PHP FFI (Foreign Function Interface)
The FFI extension allows PHP code to directly call functions and manipulate data from C libraries without writing additional C code or PHP extensions.
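To make the FFI mechanism concrete, here is a minimal sketch that binds a single C function from libm. The library name `libm.so.6` assumes a Linux system with glibc, and the FFI extension must be enabled (`ffi.enable=1` in php.ini):

```php
<?php
// Minimal FFI sketch: call libm's cos() from PHP with no C glue code
// and no custom PHP extension.
$libm = FFI::cdef(
    "double cos(double x);",  // C declaration of the symbol we need
    "libm.so.6"               // shared library to load it from (Linux/glibc assumption)
);

var_dump($libm->cos(0.0)); // float(1)
```

This is the same mechanism that lets PHP packages drive the ONNX Runtime C library directly, which is why no bespoke extension is needed for model inference.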
ONNX (Open Neural Network Exchange)
ONNX is an open standard format that allows AI models to be shared between different machine learning frameworks such as PyTorch, TensorFlow and many others.

Transformers & pipelines
Provided by the TransformersPHP package, courtesy of Kyrian Obikwelu and 7 contributors.

Supported pipelines and models
Infinite possibilities: hundreds of models readily available on the Hugging Face Hub.
Convert any model to ONNX with a Python notebook.

SmolLM
- Promising, efficient LLM
- Provided by the HF team
- Only 538 MB
- Great alternative to fully fledged models
- Can even run in the browser

Biggest limitation right now: no GPU support. Yet.

My recommendation
While this is great for small tasks (classification, image tagging, etc.), any LLM work should be delegated to GPU-based infrastructure.

Your LLM options for now:
- Use a SaaS service: OpenAI, the Claude API, etc.
- Or use HF Inference Endpoints to deploy the model of your choice, and query that API endpoint from your app.
- Or "build your own", at your own risk!

Thank you!
Guillaume Moigneu
VP, Advocacy, Upsun
guillaume@platform.sh
nls.io on Bluesky
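A hedged sketch of what the classification and generation use-cases might look like with the TransformersPHP package (`codewithkyrian/transformers`). The `pipeline()` helper is the package's public entry point, but the model identifier and the exact output shapes below are assumptions and should be checked against the package documentation:

```php
<?php
// Sketch only: assumes composer require codewithkyrian/transformers
// and that the models have been downloaded/cached locally.
require 'vendor/autoload.php';

use function Codewithkyrian\Transformers\Pipelines\pipeline;

// Text classification: returns a label and a confidence score.
$classifier = pipeline('sentiment-analysis');
$result = $classifier('This product exceeded my expectations!');
print_r($result); // e.g. ['label' => 'POSITIVE', 'score' => 0.99...] (shape is an assumption)

// Text generation with a small model; the SmolLM model id is an assumption.
$generator = pipeline('text-generation', 'HuggingFaceTB/SmolLM2-135M');
$output = $generator('The ONNX format is');
print_r($output); // typically [['generated_text' => '...']] (shape is an assumption)
```

Both pipelines run entirely on the local CPU via ONNX Runtime, which is exactly why the "no GPU support yet" limitation above matters for larger models.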
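For the "query that API endpoint from your app" option, here is a hedged sketch using PHP's cURL extension. The endpoint URL, the `HF_TOKEN` environment variable and the payload shape are placeholders; a real HF Inference Endpoint publishes its own URL and expected request format:

```php
<?php
// Sketch: POST a JSON payload to a deployed inference endpoint.
// URL and token are placeholders (assumptions), not real values.
$endpoint = 'https://YOUR-ENDPOINT.endpoints.huggingface.cloud';
$payload  = json_encode(['inputs' => 'Summarize: PHP can run ML models locally.']);

$ch = curl_init($endpoint);
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,   // return the response instead of printing it
    CURLOPT_POST           => true,
    CURLOPT_HTTPHEADER     => [
        'Authorization: Bearer ' . getenv('HF_TOKEN'),
        'Content-Type: application/json',
    ],
    CURLOPT_POSTFIELDS     => $payload,
]);
$response = curl_exec($ch);
curl_close($ch);

echo $response; // raw JSON response from the endpoint
```

The same pattern works for SaaS APIs such as OpenAI or the Claude API; only the URL, headers and payload schema change.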