Representation learning has long been a key challenge in many tasks of the multi-modal domain, such as image-text retrieval, visual question answering, video localization, speech recognition, etc. Throughout the history of multi-modal learning, model initialization has proven to be one of the most important factors. For example, research on weight initialization laid the foundation for neural-network-based methods, and image features pre-trained on the Visual Genome dataset established a new standard setting for many vision-language models.

Recently, multi-modal pre-training has emerged as a new paradigm of model initialization that establishes state-of-the-art performance on many multimedia tasks. Pre-trained models outperform traditional methods by providing stronger representations of the different modalities, learned in an unsupervised way. Multi-modal pre-training is an interesting topic that has attracted rapidly growing interest in many fields and at their intersection, including computer vision, natural language processing, speech recognition, etc. Thanks to the continuous effort of many works, the training cost has also fallen: in very recent work, a vision-language pre-training model can be trained in as little as 10 hours on 2 Titan RTX GPUs.

Despite the emerging trend of multi-modal pre-training models, many aspects remain unexplored. For example, standard settings for the fair comparison of different multi-modal pre-training models would benefit the research community. More discussion of the effectiveness of sub-modules and pre-training tasks would also give us a more thorough understanding of the pre-training mechanism. Improving training efficiency is also worth tackling.
The goals of this workshop are to (1) investigate research opportunities in multi-modal model initialization, especially multi-modal pre-training, (2) solicit novel methodologies for multi-modal pre-training, and (3) explore and discuss the advantages and possibilities of pre-training for more multimedia tasks. We expect contributions concerning multi-modal model initialization and multi-modal pre-training, involving image, language, video, speech, etc.
Topics of interest include (but are not limited to):
Deadline for Workshop Paper Submission.
Acceptance Notification of Workshop Papers.
Camera-ready date for Workshop Papers.
All papers must be formatted according to the ACM proceedings style. Click on the link to access LaTeX and Word templates for this format. Please use "sample-sigconf.tex" as a LaTeX template or "ACM_SigConf.doc" as a Word template.
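For orientation, below is a minimal sketch of the expected LaTeX skeleton, assuming the standard acmart class shipped with the linked templates; consult sample-sigconf.tex for the authoritative version, and note that the title, author, and bibliography names here are placeholders.

    \documentclass[sigconf]{acmart}  % ACM proceedings style, conference format

    \begin{document}

    \title{Your Paper Title}         % placeholder
    \author{Author Name}             % placeholder
    \affiliation{\institution{Institution Name}\country{Country}}
    \email{author@example.org}       % placeholder

    \begin{abstract}
    Abstract text goes here.
    \end{abstract}

    \maketitle                       % in acmart, the abstract precedes \maketitle

    \section{Introduction}
    Body text goes here.

    \bibliographystyle{ACM-Reference-Format}
    \bibliography{references}        % placeholder .bib file name

    \end{document}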
We invite the following two types of papers:
Full Paper: limited to 8 pages, including all text, figures, and references. Full papers should describe original content with evaluations. They will be reviewed by more than two experts based on:
1. Originality of the content
2. Quality of the content based on evaluation
3. Relevance to the theme
4. Clarity of the written presentation
Short Paper: limited to 4 pages, including all text, figures, and references. Short papers should describe work in progress, presented as position papers. They will be reviewed by two experts based on:
1. Originality of the content
2. Relevance to the theme
3. Clarity of the written presentation
Submissions should be made here.
Preparation instructions for authors' final submissions are now available. Please review: http://www.scomminc.com/pp/acmsig/mmpt.htm (for authors' and speakers' submission types and specific deadlines).
Instructions for MMPT'21 recorded presentation videos: www.scomminc.com/pp/acmsig/MMPT-present-video.htm
To be announced.