
The program at a glance below is provided for reference; the final program will be released in July 2025.
| September 20, 2025 | |
| --- | --- |
| 11:00-18:00 | Registration |

| September 21, 2025 | |
| --- | --- |
| 09:00-09:20 | Opening Ceremony |
| 09:20-11:40 | Keynote Speech |
| 12:00-14:00 | Photography & Lunch Time |
| 14:00-17:30 | CoST Parallel Sessions |

| September 22, 2025 | |
| --- | --- |
| 09:00-12:00 | CoST Parallel Sessions |
| 12:00-14:00 | Lunch Time |
| 14:00-17:30 | CoST Parallel Sessions |
Yong-Jin Liu, Professor, Tsinghua University, China
Title: Two Case Studies on AI+VR Technology for Chinese Traditional Culture Dissemination
Abstract: Intangible cultural heritage embodies the crystallization of human wisdom, carrying rich historical, artistic, and scientific value. Advances in artificial intelligence, human-computer interaction, and virtual reality technologies have brought new opportunities for cultural heritage preservation. In this talk, I will present two recent works that apply intelligent human-computer interaction technologies to promoting intangible cultural heritage skills. The first work focuses on the art of iron flower casting: we developed hardware that provides airflow and thermal feedback, driven by real-time detection of heat sources in virtual reality scenes using the YOLO object detection algorithm, so that the feedback adapts intelligently to the recognized heat sources. The second work addresses the craftsmanship of Hanfu (traditional Chinese clothing): we developed VisHanfu, an interactive learning system based on virtual reality. We built high-precision three-dimensional models of representative Hanfu artifacts from different dynasties and provide a Hanfu-making experience. By integrating cloth simulation algorithms with motion capture data, we animate a 3D avatar wearing Hanfu in dance performances, enabling users to intuitively understand the non-rigid motion effects of the Hanfu models they create. Both works show the immense potential of intelligent human-computer interaction technologies in the preservation and dissemination of intangible cultural heritage.
Bio: Yong-Jin Liu is a tenured Full Professor in the Computer Science Department at Tsinghua University. He obtained his Ph.D. degree in 2004 from the Hong Kong University of Science and Technology. His research interests include intelligent media processing, computer graphics, computer vision, and applied machine intelligence. He is the Director of the Institute of Human-Computer Interaction and Media Integration at Tsinghua University and the Director of the Intelligent Graphics Committee of the Chinese Society of Image and Graphics (CSIG). He was granted the National Science Fund for Distinguished Young Scholars (2018-2022) and the National Science Fund for Excellent Young Scholars (2014-2016), and was selected for the New Century Talent Program of the Ministry of Education, China (2011). In the past five years, he has published more than 100 papers in venues such as PAMI, TOG/SIGGRAPH, TIP, TAFFC, TVCG, CVPR, and AAAI, including more than 60 SCI-indexed papers, 7 of which were selected as ESI highly cited/hot papers. He has received more than 10 best paper awards at leading journals and conferences, including the Best Paper Award of the International Consortium of Chinese Mathematicians (ICCM) twice, as well as the second prize of the National Technology Invention Award in 2011.
Huiyuan Fu, Professor, Beijing University of Posts and Telecommunications, China
Title: Key Technologies and Applications for Image and Video Quality Enhancement and Evaluation
Abstract: With the rapid development of digital media technology, images and videos play an increasingly important role in information dissemination, entertainment, security, and other fields. However, owing to limitations in camera performance, channel interference, compression algorithms, and other factors, image and video quality is often degraded to varying degrees by artifacts such as blur, noise, and color distortion. This work explores key technologies and applications for image and video quality enhancement and evaluation. For enhancement, a deep learning-based super-resolution method is proposed to restore high-frequency details and improve image resolution, and a spatiotemporal joint algorithm is designed for video enhancement, effectively reducing noise and blur while maintaining temporal consistency. For evaluation, a no-reference image quality assessment model is built on characteristics of the human visual system, accurately quantifying quality without reference images. Experiments show significant improvements in both objective metrics and subjective visual quality compared with existing methods. These technologies have been applied in medical imaging, surveillance video enhancement, and multimedia content evaluation, demonstrating their practical value and supporting related technological advancement.
Bio: Huiyuan Fu received the Ph.D. degree in computer science from the Beijing University of Posts and Telecommunications, Beijing, China, in 2014. He is currently a Professor with the School of Computer Science, Beijing University of Posts and Telecommunications. His research interests include visual big data, machine learning and pattern recognition, and multimedia systems, and he has authored more than 60 papers in these fields. Dr. Fu received the Best Student Paper Award at IEEE ICME in 2016.