MuReC ’25 - Multimodal Representation Learning and Clustering

Title

MuReC ’25 - Multimodal Representation Learning and Clustering

Description

The proliferation of complex, multimodal data—spanning text, images, graphs, audio, and structured attributes—poses significant challenges for extracting informative, robust, and transferable representations. In this context, representation learning, particularly in its unsupervised and self-supervised forms, has emerged as a pivotal approach. When coupled with clustering techniques, it provides powerful tools to structure, compress, and interpret large-scale data by uncovering latent structures and hidden semantics.
This workshop aims to foster progress at the intersection of multimodal representation learning and clustering, bringing together researchers focused on foundational advances and real-world applications involving heterogeneous data modalities and learning paradigms. A particular emphasis is placed on latent factor models, multi-view learning, and self-supervised techniques, especially their integration into advanced deep architectures such as graph neural networks and large language models.

Organizers

• Mohamed Nadif, Université Paris Cité
• Lazhar Labiod, Université Paris Cité

Contact Person

Lazhar Labiod (lazhar.labiod@u-paris.fr)

© 2025 ACM Multimedia Asia Conference. All Rights Reserved.