GIT: A Generative Image-to-text Transformer for Vision and Language
Jianfeng Wang, Principal Researcher, Microsoft Cloud & AI

Example captions predicted by GIT:
- A cartoon illustration of a pikachu and a mouse talking to each other.
- A cell phone screen shows the time of 1:44, wednesday, november 4.
- A rollback sign that says $14.88 on it.
- A white background with the numbers 49a785 and c392z6.
- A text that says marlow processes are stochastic processes, traditionally in discrete or continuous time, that have the marlow property.
- A gold and brown sign that says university of colorado 1876.

GIT: A Generative Image-to-text Transformer
[Architecture diagram. (a) Pre-training/captioning: an image encoder feeds visual features to a text decoder (tokenize & embed, multi-head self-attention, feed forward), which generates the caption token by token between BOS and EOS, e.g. "a tecsun radio with the time of 12:54." (b) VQA: the question ("what time is it?") is given as a text prefix and the decoder generates the answer ("12:54"). (c) Video: each frame (Frame 1 ... Frame 6) is passed through the image encoder, temporal embeddings (1 ... 6) are added, and the frame features are concatenated before decoding.]

Jianfeng Wang, Zhengyuan Yang, Xiaowei Hu, Linjie Li, Kevin Lin, Zhe Gan, Zicheng Liu, Ce Liu, Lijuan Wang. "GIT: A Generative Image-to-text Transformer for Vision and Language." arXiv.
- One image encoder (Florence/CoSwin) + one text decoder
- Pre-trained on 0.8 billion image-text pairs

Relation with existing approaches
- Novel object captioning (nocaps)
  - Existing approaches: tags as extra input, from an object detector/classifier/CLIP
  - Ours: no such dependency
- Scene-text related tasks
  - Existing approaches: OCR text as extra input
  - Ours: no such dependency

vs Flamingo/Coca
- GIT (ours): smaller model size and less data, better performance

Model               | Params | Data           | COCO  | nocaps | TextVQA | VizWiz-QA | VATEX | YouCook2
Flamingo (DeepMind) | 80B    | 2.3B+27M/video | 138.1 | -      | 54.1    | 65.4      | 84.2  | 118.6
Coca (Google)       | 2.1B   | 4.8B           | 143.6 | 120.6  | -       | -         | -     | -
GIT (ours)          | 0.7B   | 0.8B           | 144.8 | 123.4  | 59.8    | 67.5      | 93.8  | 129.8

Data & model scaling
- Data: 4M, 14M, 800M image-text pairs
- Model: CLIP/ViT-B, CLIP/ViT-L, Florence/CoSwin

Results on image/video captioning/QA
- New SOTA on 12 image/video captioning and question-answering tasks
- *: evaluated on servers; prior SOTA as reported in technical reports or publications
[Results chart comparing GIT with prior SOTA (Coca, Flamingo; Flamingo: 84.2) and human performance (125.5).]

Results on image classification
- Traditional approaches: vocabulary pre-defined
- Our proposal: label name as caption; vocabulary-free
- Any out-of-vocabulary prediction? 0.026%

Results on scene text recognition
- GIT-TextCaps: the same TextCaps model; counted correct if the caption contains the keyword
- GIT-MJST: scene text as caption, trained on MJ/ST; counted correct on exact match (e.g. "motion")

GIT predicted captions
- Recognizing and describing scene texts
- Diverse entities and concepts

Open-ended VQA
- Scene text understanding
- Question answering
- Open-ended vocabulary

Conclusion
- GIT, a generative image-to-text transformer model
- 0.7B parameters; 0.8B image-text pairs
- New SOTA on 12 vision-language tasks
- Surpasses human performance on TextCaps for the first time
- New scheme of classification as generation: vocabulary-free in both training & inference
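The unified scheme above (one decoder for captioning and VQA, differing only in the text prefix) can be sketched as a greedy decoding loop. This is a toy illustration, not the paper's code: `toy_next_token` is a canned stand-in for the real Florence/CoSwin encoder plus transformer decoder, and all names here are made up.

```python
# Toy illustration of GIT-style unified decoding: captioning and VQA
# use the same decoder; only the text prefix differs.
BOS, EOS = "[BOS]", "[EOS]"

def toy_next_token(image_features, tokens):
    """Stand-in for the transformer decoder: returns the next token.

    A real implementation runs self-attention over [image features;
    token embeddings] with a causal mask on the text part. Here we
    just look the continuation up in a canned table keyed on context.
    """
    table = {
        (): "a", ("a",): "tecsun", ("a", "tecsun"): "radio",
        ("a", "tecsun", "radio"): EOS,
        ("what", "time", "is", "it", "?"): "12:54",
        ("what", "time", "is", "it", "?", "12:54"): EOS,
    }
    return table.get(tuple(tokens), EOS)

def generate(image_features, prefix=(), max_len=20):
    """Greedy autoregressive decoding; only the tokens after the
    prefix (the answer, in the VQA case) are returned."""
    tokens = list(prefix)
    for _ in range(max_len):
        nxt = toy_next_token(image_features, tokens)
        if nxt == EOS:
            break
        tokens.append(nxt)
    return tokens[len(prefix):]

img = object()  # placeholder for encoder output
print(" ".join(generate(img)))                                         # captioning
print(" ".join(generate(img, prefix=("what", "time", "is", "it", "?"))))  # VQA
```

The design point is that no task-specific head is needed: a question is just more prefix tokens to condition on.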
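The "classification as generation" scheme treats the label name as the caption to generate, so inference needs no fixed softmax vocabulary; the slides note only 0.026% of predictions fall outside the label set. A minimal sketch of the vocabulary-free prediction step (the function name and example strings are mine, not from the paper):

```python
# Classification as generation: the model is trained to emit the label
# name as its caption, so the "classifier" is just string comparison.
def classify(generated: str, label_names: set) -> tuple:
    """Return (prediction, in_vocabulary). The prediction is simply the
    generated text; it may fall outside the known label set."""
    pred = generated.strip().lower()
    return pred, pred in label_names

labels = {"tabby cat", "golden retriever", "fire truck"}
print(classify("Tabby Cat", labels))     # in-vocabulary prediction
print(classify("orange tabby", labels))  # out-of-vocabulary prediction
```

Because nothing constrains the decoder to the label set, out-of-vocabulary outputs are possible in principle; the reported 0.026% rate shows they are rare in practice.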
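The two scene-text protocols above score differently: GIT-TextCaps is counted correct if the ground-truth word appears anywhere in the caption, while GIT-MJST must reproduce the scene text exactly. A sketch of the two scoring rules (function names are illustrative, not from the paper):

```python
# Two scoring rules for scene text recognition, per the slides.
def contains_keyword(caption: str, gt: str) -> bool:
    """GIT-TextCaps protocol: the captioning model is correct if the
    ground-truth word appears anywhere in its caption."""
    return gt.lower() in caption.lower().split()

def exact_match(prediction: str, gt: str) -> bool:
    """GIT-MJST protocol: the model is fine-tuned on MJ/ST to emit the
    scene text itself, so the whole output must equal the ground truth."""
    return prediction.strip().lower() == gt.lower()

print(contains_keyword("a sign that says motion on it", "motion"))  # True
print(exact_match("motion", "motion"))                              # True
print(exact_match("a sign that says motion", "motion"))             # False
```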
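The video extension in panel (c) of the architecture (encode each frame independently, add a per-frame temporal embedding, concatenate) can be shown with plain lists standing in for tensors. All values below are made up for illustration:

```python
# Video input, GIT style: per-frame features get a temporal embedding
# added, then are concatenated along the sequence axis for the decoder.
def video_features(frame_feats, temporal_embs):
    """frame_feats / temporal_embs: one vector (list) per frame."""
    assert len(frame_feats) == len(temporal_embs)
    out = []
    for feat, emb in zip(frame_feats, temporal_embs):
        out.extend(f + e for f, e in zip(feat, emb))  # add, then concat
    return out

frames = [[1, 2], [3, 4]]    # 2 frames, 2-dim features each (toy values)
embs = [[10, 10], [20, 20]]  # temporal embeddings for frames 1 and 2
print(video_features(frames, embs))  # [11, 12, 23, 24]
```

The temporal embedding is what lets the decoder tell otherwise interchangeable frame features apart after concatenation.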