About Me

I am currently a third-year Ph.D. student at the Tianjin Key Laboratory of Visual Computing and Intelligent Perception (VCIP), Nankai University, advised by Prof. Xiang Li and Prof. Jian Yang. I am also a research intern at Alibaba DAMO Academy, working with Dr. Yibing Song and Dr. Fan Wang. My research mainly focuses on vision-language models, multi-modal learning, and efficient model computation.

The code for my research work will be open-sourced, and I will also attach a detailed Chinese interpretation of each paper. Although the interpretations may be somewhat fragmented, I will do my best to present the insights and ideas behind each paper.

I also maintain a curated list [Links] of prompt learning methods for vision-language models. Feel free to check it out~

The journey of scientific research is challenging, but I am passionate about my work. If you are interested in my research or run into any problems, please feel free to contact me via email (zhengli97[at]qq.com).

Education

  • 2022 - Present. Ph.D., Computer Science and Technology, Nankai University.
  • 2019 - 2022. M.Eng., Computer Applied Technology, Hangzhou Normal University.
  • 2015 - 2019. B.Eng., Communication Engineering, North China University of Science and Technology.

Experience

Publications

[CVPR 2024] PromptKD: Unsupervised Prompt Distillation for Vision-Language Models.
Zheng Li, Xiang Li, Xinyi Fu, Xin Zhang, Weiqiang Wang, Shuo Chen, Jian Yang.
[Paper][Code][Project Page][Chinese Interpretation]
PromptKD is a simple and effective prompt-driven unsupervised distillation framework for VLMs that achieves state-of-the-art performance.

[PR 2024] Dual Teachers for Self-Knowledge Distillation.
Zheng Li, Xiang Li, Lingfeng Yang, Renjie Song, Jian Yang, Zhigeng Pan.
[Paper][PDF][Chinese Interpretation]
DTSKD explores a new self-KD framework in which the student network receives self-supervision from dual teachers drawn from two dramatically different fields.

[AAAI 2023] Curriculum Temperature for Knowledge Distillation.
Zheng Li, Xiang Li, Lingfeng Yang, Borui Zhao, Renjie Song, Lei Luo, Jun Li, Jian Yang.
[Paper][Code][Project Page][Chinese Interpretation]
CTKD organizes the distillation task from easy to hard through a dynamic and learnable temperature. The temperature is learned during the student's training via a reversed gradient that adversarially maximizes the distillation loss (see the sketch below).
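
To make the idea concrete, here is a minimal, unofficial PyTorch sketch of a learnable temperature trained through gradient reversal. The class names are illustrative, and CTKD's actual curriculum (gradually scaling the reversed gradient from easy to hard) is omitted; please refer to the official code for the real implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates the gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

class LearnableTemperature(nn.Module):
    """A single global temperature updated adversarially via gradient reversal.
    (Illustrative name; not from the official CTKD code.)"""
    def __init__(self, init_temp: float = 4.0):
        super().__init__()
        self.temp = nn.Parameter(torch.tensor(init_temp))

    def forward(self) -> torch.Tensor:
        # Because the gradient is reversed, the optimizer pushes the temperature
        # to *increase* the distillation loss while the student minimizes it.
        return GradReverse.apply(self.temp)

def kd_loss(student_logits, teacher_logits, temp_module):
    t = temp_module()
    return F.kl_div(
        F.log_softmax(student_logits / t, dim=1),
        F.softmax(teacher_logits / t, dim=1),
        reduction="batchmean",
    ) * (t * t)  # standard T^2 scaling keeps gradient magnitudes comparable
```

Optimizing the student and the temperature with the same loss then yields the min-max, easy-to-hard behavior described above.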

[ICCV 2021] Online Knowledge Distillation for Efficient Pose Estimation.
Zheng Li, Jingwen Ye, Mingli Song, Ying Huang, Zhigeng Pan.
[Paper][Code][Project Page][Chinese Interpretation]
OKDHP is the first work to distill pose structure knowledge in a one-stage manner. Its FAU module integrates the students from multiple branches into one ensemble teacher, which in turn distills knowledge back to each student branch (see the sketch below).
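
As an illustration of the branch-ensembling idea, here is a simplified, unofficial PyTorch sketch. The real FAU aggregates branch features in a more sophisticated way; here the branch heatmaps are simply fused with learned softmax weights, and `NaiveFAU` / `online_kd_loss` are hypothetical names.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NaiveFAU(nn.Module):
    """Simplified stand-in for OKDHP's FAU: fuses per-branch pose heatmaps
    into one ensemble 'teacher' heatmap with learned branch weights."""
    def __init__(self, num_branches: int):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(num_branches))

    def forward(self, branch_heatmaps):
        # branch_heatmaps: list of (B, K, H, W) tensors, one per student branch
        w = torch.softmax(self.logits, dim=0)          # (N,)
        stacked = torch.stack(branch_heatmaps, dim=0)  # (N, B, K, H, W)
        return (w.view(-1, 1, 1, 1, 1) * stacked).sum(dim=0)

def online_kd_loss(branch_heatmaps, teacher_heatmap):
    # Each student branch regresses toward the (detached) ensemble teacher,
    # so knowledge flows from the fused teacher back to every branch.
    return sum(F.mse_loss(h, teacher_heatmap.detach()) for h in branch_heatmaps)
```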

[ACCV 2020] Online Knowledge Distillation via Multi-branch Diversity Enhancement.
Zheng Li, Ying Huang, Defang Chen, Tianren Luo, Ning Cai, Zhigeng Pan.
[Paper]
OKDMDE is a simple and effective technique to enhance model diversity in online knowledge distillation.

  • [ECCV 2024] Cascade Prompt Learning for Vision-Language Model Adaptation.
    Ge Wu, Xin Zhang, Zheng Li, Zhaowei Chen, Jiajun Liang, Jian Yang, Xiang Li.
    [Paper] [Code]

  • [CVIU 2023] GEIKD: Self-knowledge Distillation based on Gated Ensemble Networks and Influences-based Label Noise Removal.
    Fuchang Liu, Yu Wang, Zheng Li, Zhigeng Pan.
    [Paper]

Competitions

  • Kaggle Competition Master. 2 Gold Medals. [My Profile]
    • 2019.11. Understanding Clouds from Satellite Images. Rank: 7th/1538 (Top 1%). Gold Medal [Link][Solution]
    • 2018.04. 2018 Data Science Bowl. Rank: 8th/3634 (Top 1%). Solo Gold Medal [Link][Solution]

Review Services

  • 2022 - Present. AAAI, ECCV, CVPR, ICML, NeurIPS, ICLR, IJCV, TIP, KBS, TNNLS…

Personal Hobbies