Finding Structural Knowledge in Multimodal-BERT

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Documents

  • Fulltext: Final published version, 835 KB, PDF document

Authors

  • Victor Milewski
  • Miryam de Lhoneux
  • Marie-Francine Moens
Abstract

In this work, we investigate the knowledge learned in the embeddings of multimodal-BERT models. More specifically, we probe their capacity to store the grammatical structure of linguistic data and the structure learned over objects in visual data. To that end, we first make the inherent structure of language and visuals explicit: the sentences that describe an image are given a dependency parse, and the object regions in the image are connected by dependencies of their own. We call this explicit visual structure the scene tree; it is based on the dependency tree of the language description. Extensive probing experiments show that the multimodal-BERT models do not encode these scene trees.
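The abstract does not spell out the probe itself. As an illustration only, a common formulation for this kind of experiment is a distance-based structural probe (in the style of Hewitt and Manning), which learns a linear map B such that squared distances between projected embeddings approximate tree distances between tokens. The sketch below is an assumption about the general technique, not the paper's code; the class name, probe_rank, and the toy data are all illustrative.

    import torch

    class StructuralProbe(torch.nn.Module):
        # Learns a matrix B so that ||B(h_i - h_j)||^2 approximates
        # the tree distance between tokens i and j in the gold parse.
        def __init__(self, hidden_dim: int, probe_rank: int):
            super().__init__()
            self.B = torch.nn.Parameter(torch.randn(hidden_dim, probe_rank) * 0.01)

        def forward(self, embeddings: torch.Tensor) -> torch.Tensor:
            # embeddings: (seq_len, hidden_dim) -> pairwise squared distances
            transformed = embeddings @ self.B                  # (seq_len, rank)
            diffs = transformed.unsqueeze(1) - transformed.unsqueeze(0)
            return (diffs ** 2).sum(dim=-1)                    # (seq_len, seq_len)

    # Toy usage: random stand-ins for encoder embeddings and gold tree distances.
    seq_len, hidden_dim = 6, 768
    probe = StructuralProbe(hidden_dim, probe_rank=128)
    embeddings = torch.randn(seq_len, hidden_dim)
    gold = torch.randint(1, 5, (seq_len, seq_len)).float()
    gold = (gold + gold.T) / 2
    gold.fill_diagonal_(0)                                     # d(i, i) = 0
    loss = torch.abs(probe(embeddings) - gold).mean()          # L1 loss on distances
    loss.backward()

If the probe can be trained to reconstruct the gold distances well, the embeddings are taken to encode the tree; the paper's negative result for scene trees would correspond to poor reconstruction under such a probe.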
Original language: English
Title of host publication: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Publisher: Association for Computational Linguistics
Publication date: 2022
Pages: 5658–5671
Publication status: Published - 2022
Event: 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland
Duration: 23 May 2022 – 25 May 2022

Conference

Conference: 60th Annual Meeting of the Association for Computational Linguistics
Country: Ireland
City: Dublin
Period: 23/05/2022 – 25/05/2022

ID: 323621674