


Sep 09, 2024

Joint Seminar (2024-09-09)

School of Biomedical Sciences & the Laboratory of Data Discovery for Health (D24H) cordially invite you to join the following seminar:

Speaker: Professor Jinman Kim
Talk Title: Multi-modal Learning for Biomedical Image Analysis and Visualisation

Date: September 9, 2024 (Monday) 
Time: 2:30 p.m. – 3:30 p.m. (HK Time) 
Venue: Lecture Theatre 2, G/F, William MW Mong Block, 21 Sassoon Road, Pokfulam, Hong Kong
Registration: https://www.d24h.hk/reg 
Host: Professor Joshua Ho (School of Biomedical Sciences)

Biography

Professor Kim is a Professor of Computer Science at The University of Sydney. He co-leads the Faculty of Engineering's Digital Health Imaging, a pillar of the Digital Science Initiatives (DSI), with the vision of combining the Faculty's expertise in AI applied to medical image analysis. He is also the Director of the Telehealth and Technology Center, Nepean Hospital.

Professor Kim's research focuses on the application of machine learning to biomedical image analysis and visualisation. He specialises in cross-modal and multi-modal learning, which includes biomedical visual-language representations, image-omics, multi-modal data processing, and biomedical mixed reality technologies. He established and leads the Biomedical Data Analysis and Visualisation (BDAV) Lab at the School of Computer Science. He has published widely in this field and received multiple competitive grants and scientific recognitions.

Professor Kim is currently a visiting professor at the Centre for Informatics at The University of Geneva, Switzerland. He received his PhD in 2006 and was subsequently an Australian Research Council (ARC) Postdoctoral Research Fellow, both at The University of Sydney. He was a Marie Curie Senior Research Fellow at the University of Geneva prior to joining The University of Sydney in 2013 as a faculty member.


Abstract

Medical imaging plays a pivotal role in patient management in modern healthcare, with most patients treated in hospitals undergoing imaging procedures. These technologies can visualise anatomy and function in virtually every organ system in the body in intricate detail. Numerous medical imaging modalities are available, varying in complexity and sophistication, from plain digital chest X-rays to simultaneous functional and anatomical imaging with positron emission tomography (PET) and computed tomography (CT) imaging (PET-CT). The challenge now is how to maximise the extraction of meaningful information from these images and present it to users. Strategies are needed to harness knowledge from vast image datasets and complementary sources such as image sequences, text reports, and genomics. Fortunately, the era of artificial intelligence (AI) is fuelling the growth of smart decision support and analysis tools for medical image analysis. Despite rapid advancements in integrating AI algorithms into clinical decision support systems, we are still in the nascent stages of the AI revolution in medical imaging. In this seminar, we will present our research on cross-modal learning to integrate imaging and complementary data for disease modelling, analysis and visualisation, covering projects in language-guided image analysis, 3D volume generation from 2D images, surgical mixed reality, sequential PET-CT synthesis for image enhancement, and biological imaging data.


ALL ARE WELCOME
Should you have any enquiries, please contact Ms Karen Ha at kykha@d24h.hk.