
Prof. Lei Meng

Shandong University, China


Title:

Cross-modal Inference and Heterogeneous Information Fusion for Open-domain Visual Understanding



Abstract:

Visual understanding aims to build machine learning models that can recognize objects, scenes, and events from visual media, i.e., images and videos. Although deep learning has achieved remarkable performance in vertical domains such as face recognition and traffic detection, open-domain visual understanding remains challenging, mainly because the regression capability of neural models cannot scale up to the ever-growing number of visual patterns in large-scale training corpora. In this talk, I will present the recent progress of our Multimedia Mining, Reasoning, and Creation (MMRC) Lab on using multimodal descriptors of visual media for open-domain visual understanding tasks, ranging from image classification, product recommendation, and video classification to 2D/3D visual synthesis.



Biography:

Lei Meng is a Qilu Young Scholar Distinguished Professor and doctoral supervisor at the School of Software, Shandong University, where he has worked since 2020. He received his bachelor's degree in engineering from Shandong University in 2010 and his Ph.D. from the School of Computer Science and Engineering, Nanyang Technological University (NTU), Singapore, in 2015, under the supervision of Professor Ah-Hwee Tan. In 2015, he joined the Joint NTU-UBC Research Centre of Excellence in Active Living for the Elderly (LILY) as a Research Fellow, working with Professor Chunyan Miao of Nanyang Technological University and Professor Cyril Leung of the University of British Columbia. In 2018, he joined the NUS-Tsinghua-Southampton Centre for Extreme Search (NExT++) as a Senior Research Fellow, working with Professor Tat-Seng Chua of the National University of Singapore.

His research addresses scientific problems in multimedia computing and data mining driven by Internet big data, with a long-term focus on machine learning theory and techniques for multimedia knowledge mining and content representation. He has conducted research on key smart-home technologies for health big data analysis, independently built dietary health datasets containing tens of millions of records, and developed and deployed application systems for aging-friendly search, non-intrusive risk assessment, and healthy diet management. Oriented toward strategic needs, he also focuses on digital perception and intelligent decision-making in multi-scale social governance scenarios, carrying out pioneering research in multimedia understanding, cross-modal reasoning, digital twins, and the metaverse. His main research topics include (1) self-organizing clustering algorithms based on adaptive resonance theory (ART); (2) image representation algorithms based on cross-modal enhancement; (3) deep learning methods for imbalanced data; (4) cross-modal causal inference methods combined with knowledge graphs; and (5) image and 3D scene generation based on multi-source data.