Paper
Lightweight human pose estimation based on high-resolution network
25 May 2023
Zhiwen Yang, Ruan Yang, Yunong Yang
Proceedings Volume 12636, Third International Conference on Machine Learning and Computer Application (ICMLCA 2022); 126364U (2023) https://doi.org/10.1117/12.2675120
Event: Third International Conference on Machine Learning and Computer Application (ICMLCA 2022), 2022, Shenyang, China
Abstract
Most existing human pose estimation methods focus on improving the accuracy of prediction results, but their large numbers of network parameters and high computational complexity demand a great deal of computing resources. In this paper, a lightweight human pose estimation method based on a high-resolution network is proposed. The bottleneck module and the basic module of the high-resolution network are redesigned by replacing ordinary convolutions with depthwise separable convolutions and integrating an attention mechanism, which preserves the accuracy of the network while greatly reducing the number of parameters and the computational complexity of the model. Experimental results on the COCO VAL2017 dataset show an 84.5% reduction in the number of model parameters, a 73.9% reduction in computational complexity, and a 0.2% increase in human keypoint detection accuracy compared to the high-resolution network.
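The parameter savings reported above follow directly from how a depthwise separable convolution factorizes a standard convolution. As a minimal illustration (not the paper's implementation, whose exact channel widths and attention design are not given here), the sketch below counts weights for both variants at an assumed 3x3 kernel with 64 input and 64 output channels:

```python
def conv_params(c_in, c_out, k):
    # Standard convolution: one k x k kernel per (input, output) channel pair.
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    # Depthwise stage: one k x k kernel per input channel.
    # Pointwise stage: a 1 x 1 convolution that mixes channels.
    return c_in * k * k + c_in * c_out

# Hypothetical example: a 3x3 convolution with 64 input and 64 output channels.
std = conv_params(64, 64, 3)                  # 36864 weights
sep = depthwise_separable_params(64, 64, 3)   # 576 + 4096 = 4672 weights
reduction = 1 - sep / std
print(f"standard: {std}, separable: {sep}, reduction: {reduction:.1%}")
```

At these assumed sizes the factorization alone removes roughly 87% of the weights in a single layer, which is consistent in scale with the overall 84.5% model-level reduction the abstract reports once the unchanged layers and the added attention parameters are accounted for.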
© (2023) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Zhiwen Yang, Ruan Yang, and Yunong Yang "Lightweight human pose estimation based on high-resolution network", Proc. SPIE 12636, Third International Conference on Machine Learning and Computer Application (ICMLCA 2022), 126364U (25 May 2023); https://doi.org/10.1117/12.2675120
KEYWORDS
Convolution
Pose estimation
Image enhancement
Autoregressive models
Education and training
Image resolution
Ablation