I am a master’s student at Harbin Institute of Technology (Shenzhen), supervised by Prof. Shuhan Qi. I am currently a visiting student at The Hong Kong University of Science and Technology (HKUST), working with Prof. Long Chen.

🤔 Research Interests

  • Multimodal Learning: Multimodal Large Language Models (MLLMs)
  • Efficient Methods: Efficient Training and Inference, Continual Learning, Model Compression

I am enthusiastic about minimalist, efficient, and practical methods.

🔥 News

  • Honored to join the LONG Group at HKUST as a visiting student, working with Prof. Long Chen.
  • One paper on continual learning for MLLMs is accepted to the EMNLP 2025 main conference.
  • One Zhihu blog post on the mechanism of MLLMs is reposted by PaperWeekly.
  • One paper on compressing diffusion models is released on arXiv.
  • Excited to join the OPPO AI Center as a Research Intern.

📝 Publications

EMNLP 2025 (Main)

Merge then Realign: Simple and Effective Modality-Incremental Continual Learning for Multimodal LLMs

Dingkun Zhang, Shuhan Qi, Xinyu Xiao, Kehai Chen, Xuan Wang

Paper

  • Extend existing MLLMs to more modalities efficiently.
  • Mitigate catastrophic forgetting with a minimalist design.
arXiv

LAPTOP-Diff: Layer Pruning and Normalized Distillation for Compressing Diffusion Models

Dingkun Zhang*, Sijia Li*, Chen Chen, Qingsong Xie, Haonan Lu

Paper

  • Compress diffusion models through layer pruning and knowledge distillation.

📖 Education

  • 2024.06 - present, Master’s Student, Harbin Institute of Technology (Shenzhen).
  • 2020.09 - 2024.06, Undergraduate Student, Harbin Institute of Technology (Shenzhen).

💻 Internships & Visits

  • 2025.10 - present, Visiting Student, Hong Kong University of Science and Technology.
  • 2023.09 - 2024.04, Research Intern, OPPO AI Center.