Online Knowledge Distillation for
Efficient Pose Estimation

Zheng Li1, Jingwen Ye2, Mingli Song2, Ying Huang1, Zhigeng Pan1*

1HZNU, 2ZJU
ICCV 2021
*Indicates Corresponding Author

zhengli97@qq.com

Abstract

Existing state-of-the-art human pose estimation methods require heavy computational resources for accurate predictions. One promising technique to obtain an accurate yet lightweight pose estimator is knowledge distillation, which distills the pose knowledge from a powerful teacher model to a less-parameterized student model. However, existing pose distillation works rely on a heavy pre-trained estimator to perform knowledge transfer and require a complex two-stage learning procedure. In this work, we investigate a novel Online Knowledge Distillation framework for Human Pose estimation, termed OKDHP, which distills pose structure knowledge in a one-stage manner to guarantee distillation efficiency. Specifically, OKDHP trains a single multi-branch network and obtains the predicted heatmaps from each branch, which are then assembled by a Feature Aggregation Unit (FAU) into the target heatmaps that in turn teach each branch. Instead of simply averaging the heatmaps, the FAU consists of multiple parallel transformations with different receptive fields and thus leverages multi-scale information to produce higher-quality target heatmaps. The pixelwise Kullback-Leibler (KL) divergence is then used to minimize the discrepancy between the target heatmaps and the predicted ones, which enables the student network to learn the implicit keypoint relationships. In addition, an unbalanced OKDHP scheme is introduced to customize student networks with different compression rates. The effectiveness of our approach is demonstrated by extensive experiments on two common benchmark datasets, MPII and COCO.
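To make the distillation objective concrete, the following is a minimal PyTorch-style sketch of the pixelwise KL divergence between a branch's predicted heatmaps and the FAU-assembled target heatmaps. The function name, the temperature parameter, and the exact normalization are illustrative assumptions, not the paper's official implementation.

import torch.nn.functional as F

def pixelwise_kl_loss(student_heatmaps, target_heatmaps, temperature=1.0):
    # student_heatmaps, target_heatmaps: (N, K, H, W) tensors, where K is
    # the number of keypoints. Each keypoint map is treated as a spatial
    # distribution over the H*W pixel locations.
    n, k, _, _ = student_heatmaps.shape
    student = student_heatmaps.view(n, k, -1) / temperature
    target = target_heatmaps.view(n, k, -1) / temperature
    log_p_student = F.log_softmax(student, dim=-1)
    p_target = F.softmax(target, dim=-1)
    # Sum the KL over pixel locations, then average over samples and keypoints.
    kl = F.kl_div(log_p_student, p_target, reduction="none").sum(dim=-1)
    return kl.mean()

Taking the softmax over the flattened spatial dimension means the loss compares where each heatmap places its probability mass for a given keypoint, rather than the raw heatmap magnitudes.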

Framework

Fig. 1: An overview of the proposed Online Knowledge Distillation framework for Human Pose estimation (OKDHP). Each branch serves as an independent pose estimator. The FAU learns to ensemble the heatmaps from all branches, establishing a stronger teacher. Lkl denotes the KL divergence loss between each branch's intermediate heatmaps and the ensemble heatmaps.
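To illustrate the aggregation step in Fig. 1, the sketch below shows one way such an FAU could be realized in PyTorch: parallel convolutions with different kernel sizes capture multi-scale context, and a softmax over the branch dimension produces attention weights for combining the per-branch heatmaps. The class name, kernel sizes, and layer choices are assumptions for illustration, not the paper's exact architecture.

import torch
import torch.nn as nn

class FeatureAggregationUnit(nn.Module):
    # Attention-style aggregation of per-branch heatmaps. Parallel
    # convolutions with different kernel sizes provide different receptive
    # fields; a 1x1 convolution scores each branch, and a softmax over the
    # branch dimension yields the aggregation weights.
    def __init__(self, num_keypoints, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.transforms = nn.ModuleList(
            [nn.Conv2d(num_keypoints, num_keypoints, k, padding=k // 2)
             for k in kernel_sizes]
        )
        self.score = nn.Conv2d(num_keypoints * len(kernel_sizes), 1, 1)

    def forward(self, branch_heatmaps):
        # branch_heatmaps: list of (N, K, H, W) tensors, one per branch.
        scores = []
        for hm in branch_heatmaps:
            multi_scale = torch.cat([t(hm) for t in self.transforms], dim=1)
            scores.append(self.score(multi_scale))                   # (N, 1, H, W)
        weights = torch.softmax(torch.stack(scores, dim=1), dim=1)   # (N, B, 1, H, W)
        stacked = torch.stack(branch_heatmaps, dim=1)                # (N, B, K, H, W)
        return (weights * stacked).sum(dim=1)                        # ensemble: (N, K, H, W)

The returned ensemble heatmaps would then serve as the target_heatmaps in the pixelwise KL loss sketched above, while each branch is typically also supervised against the ground-truth heatmaps.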