Multi-Modal Pre-Training for Multimedia Understanding

ACM International Conference on Multimedia Retrieval (ICMR) Workshop

16 - 19 November 2021, Taipei, Taiwan, China

Introduction

Representation learning has always been a key challenge in many multi-modal tasks, such as image-text retrieval, visual question answering, video localization, and speech recognition. Throughout the history of multi-modal learning, model initialization has proven to be one of the most important factors. For example, research on weight initialization laid the foundation for neural-network-based methods, and image features pre-trained on the Visual Genome dataset set a new standard for many vision-language models. Recently, multi-modal pre-training has emerged as a new paradigm of model initialization that establishes state-of-the-art performance on many multimedia tasks. Pre-trained models outperform traditional methods by providing stronger representations of different modalities, learned in an unsupervised manner. Multi-modal pre-training is an interesting topic that has attracted rapidly growing interest in many fields and at their intersections, including computer vision, natural language processing, and speech recognition. Thanks to the continuous effort of many works, recent studies show that the training time of a vision-language pre-training model can be reduced to as little as 10 hours on two Titan RTX GPUs.

Despite this emerging trend, multi-modal pre-training remains unexplored in many aspects. For example, standard settings for the fair comparison of different multi-modal pre-training models would benefit the research community. More discussion of the efficiency of sub-modules and pre-training tasks would also give us a more thorough understanding of the pre-training mechanism. Improving training efficiency is likewise worth tackling.


Call for Papers

The goals of this workshop are to (1) investigate research opportunities in multi-modal model initialization, especially multi-modal pre-training, (2) solicit novel methodologies for multi-modal pre-training, and (3) explore and discuss the advantages and possibilities of pre-training for more multimedia tasks. We expect contributions concerning multi-modal model initialization and multi-modal pre-training, involving image, language, video, speech, etc.

The topics of interest include (but are not limited to):

  • Multi-modal self-supervised learning
  • Multi-modal pre-training task
  • Multi-modal pre-training optimization
  • Multi-modal representation learning
  • Multi-modal model optimization
  • Multi-modal model initialization
  • Cross-modality retrieval
  • Lightweight multi-modal pre-training
  • Multi-modality alignment and parsing
  • Advanced multi-modal applications
  • Benchmark datasets and novel evaluation methods


Important Dates

  • April 25, 2021 (updated from April 20, 2021)

    Deadline for Workshop Paper Submission.

  • May 15, 2021 (updated from May 20, 2021)

    Acceptance Notification of Workshop Papers.

  • June 20, 2021 (updated from May 30, 2021)

    Camera-ready date for Workshop Papers.


Paper Submission

Paper Format

All papers must be formatted according to the ACM proceedings style. Click on the link to access the LaTeX and Word templates for this format. Please use "sample-sigconf.tex" as the LaTeX template or "ACM_SigConf.doc" as the Word template.
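For orientation, a minimal paper skeleton in this format might look like the sketch below. It assumes the standard acmart class that "sample-sigconf.tex" is built on; all titles, names, and file names are placeholders:

  \documentclass[sigconf]{acmart}

  \begin{document}

  % Title and author block must come before \maketitle.
  \title{Your MMPT'21 Paper Title}
  \author{First Author}
  \affiliation{%
    \institution{Example University}
    \city{Example City}
    \country{Country}}
  \email{first.author@example.org}

  % In acmart, the abstract is declared before \maketitle.
  \begin{abstract}
  A one-paragraph summary of the contribution.
  \end{abstract}

  \maketitle

  \section{Introduction}
  Body text goes here.

  % Uncomment once you have a references.bib file.
  % \bibliographystyle{ACM-Reference-Format}
  % \bibliography{references}

  \end{document}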


Length of the Paper

We invite the following two types of papers:

Full Paper: limited to 8 pages, including all text, figures, and references. Full Papers should describe original content with evaluations. They will be reviewed by more than two experts based on:
  1. Originality of the content
  2. Quality of the content based on evaluation
  3. Relevance to the theme
  4. Clarity of the written presentation

Short Paper: limited to 4 pages, including all text, figures, and references. Short Papers should describe work in progress or position papers. They will be reviewed by two experts based on:
  1. Originality of the content
  2. Relevance to the theme
  3. Clarity of the written presentation

Submission Website

Submissions should be made through the online submission site.


Camera-ready Instruction

Preparation instructions for authors' final submissions are now available. Please review http://www.scomminc.com/pp/acmsig/mmpt.htm for authors' and speakers' submission types and specific deadlines.

Instructions for MMPT'21 recorded presentation videos are available at: www.scomminc.com/pp/acmsig/MMPT-present-video.htm


Program

To be announced.


Organizers


Bei Liu

Microsoft Research Asia

bei.liu@microsoft.com

Jianlong Fu

Microsoft Research Asia

jianf@microsoft.com

Shizhe Chen

INRIA

shizhe.chen@inria.fr

Qin Jin

Renmin University of China

qjin@ruc.edu.cn

Alexander Hauptmann

Carnegie Mellon University

alex@cs.cmu.edu

Yong Rui

Lenovo Group

yongrui@lenovo.com