
19 AUG 2024 (MON) 14:30-15:30

Departmental Research Seminar Series

Multimodal Unsupervised Domain Adaptation for Remote Sensing Image Semantic Segmentation


Date: 19 AUG 2024 (Monday)

Time: 14:30-15:30 (HKT)

Venue: CLL, Department of Geography, 10F, The Jockey Club Tower, Centennial Campus, HKU


 

Abstract:

Semantic segmentation of remote sensing data is one of the most important tasks in geoscience research; its goal is to classify surface objects from remotely sensed imagery. Driven by the rapid growth of remote sensing devices and platforms, the volume of remote sensing data has increased exponentially over the past few decades, providing the field with a wealth of multisource and multimodal data. This presentation delves into multimodal fusion and unsupervised domain adaptation. The former focuses on extracting information from multiple data modalities, while the latter leverages existing data and labels from a source domain to tackle segmentation tasks in a new target domain. In particular, we will introduce multimodal fusion methods based on CNNs and Transformers, as well as strategies for aligning the source and target domains during knowledge transfer. Finally, we will discuss how multimodal information can be applied to domain adaptation tasks to address the challenge of large-scale unlabeled datasets.
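
To make the fusion idea concrete, the following is a minimal, hypothetical PyTorch sketch of two-branch feature fusion for semantic segmentation, assuming an optical image and a co-registered digital surface model (DSM) as the two modalities. It is not the speaker's architecture; the class name, layer sizes, and channel counts are illustrative assumptions only.

import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, keeping spatial resolution.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TwoBranchFusionSegmenter(nn.Module):
    # Encodes each modality separately, concatenates the features along the
    # channel dimension, and predicts a per-pixel class map.
    def __init__(self, optical_channels=3, dsm_channels=1, num_classes=6):
        super().__init__()
        self.optical_encoder = conv_block(optical_channels, 32)
        self.dsm_encoder = conv_block(dsm_channels, 32)
        self.fuse = conv_block(64, 64)                    # fusion after concatenation
        self.classifier = nn.Conv2d(64, num_classes, 1)   # 1x1 conv -> class logits

    def forward(self, optical, dsm):
        f_opt = self.optical_encoder(optical)             # (B, 32, H, W)
        f_dsm = self.dsm_encoder(dsm)                     # (B, 32, H, W)
        fused = self.fuse(torch.cat([f_opt, f_dsm], dim=1))
        return self.classifier(fused)                     # (B, num_classes, H, W)

if __name__ == "__main__":
    model = TwoBranchFusionSegmenter()
    optical = torch.randn(2, 3, 128, 128)   # batch of RGB patches
    dsm = torch.randn(2, 1, 128, 128)       # matching elevation patches
    print(model(optical, dsm).shape)        # torch.Size([2, 6, 128, 128])

In an unsupervised domain adaptation setting, such a fused representation would additionally be aligned between a labeled source domain and an unlabeled target domain, for example via adversarial training or pseudo-labeling, which is the second topic of the talk.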


Mr. Xianping Ma

Ph.D. student, The Chinese University of Hong Kong, Shenzhen, China

Mr. Xianping Ma received the bachelor’s degree in geographical information science from Wuhan University, China, in 2019. He is currently pursuing the Ph.D. degree at The Chinese University of Hong Kong, Shenzhen, China. His research interests include remote sensing image processing, deep learning, multimodal learning, and unsupervised domain adaptation.

 



