Paper
Most probable mode derivation for multi-view texture map
25 March 2023
Proceedings Volume 12592, International Workshop on Advanced Imaging Technology (IWAIT) 2023; 125920J (2023) https://doi.org/10.1117/12.2666964
Event: International Workshop on Advanced Imaging Technology (IWAIT) 2023, 2023, Jeju, Korea, Republic of
Abstract
The Versatile Video Coding (VVC) [1] standard doubles the number of intra prediction modes and most probable mode (MPM) candidates compared with the previous standard, High Efficiency Video Coding (HEVC) [2]. The MPM list is used to efficiently encode the intra prediction mode based on the modes of neighboring intra-coded blocks. VVC improves compression performance by increasing the number of intra prediction modes and MPM candidates as video resolution grows, but this design can be inefficient for texture maps, whose characteristics differ from those of natural images. In this paper, we propose an efficient MPM candidate derivation for the Truncated Signed Distance Field (TSDF) [3] volume-based mesh property (texture map) for multi-view images. The proposed method achieves a 0.92% BD-rate gain for the luma component in the random-access configuration [4].
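For readers unfamiliar with MPM signaling, the sketch below illustrates the general idea of deriving an MPM candidate list from the intra modes of the left and above neighboring blocks. It is a simplified illustration only, not the normative VVC or HEVC derivation and not the texture-map-specific derivation proposed in the paper; the mode numbers, list size, and fill rules are assumptions made for this example.

# Simplified, illustrative sketch of MPM-style candidate derivation.
# NOT the normative VVC/HEVC process and not the paper's proposed method;
# mode numbers, list size, and fill rules are assumptions for illustration.

PLANAR, DC = 0, 1          # non-angular modes
NUM_ANGULAR = 65           # VVC-style angular mode count (modes 2..66)
MPM_LIST_SIZE = 6          # VVC uses 6 MPM candidates (HEVC used 3)

def angular_neighbors(mode):
    """Return the two adjacent angular modes, wrapping inside 2..66."""
    lo, hi = 2, 2 + NUM_ANGULAR - 1
    prev = hi if mode - 1 < lo else mode - 1
    nxt = lo if mode + 1 > hi else mode + 1
    return prev, nxt

def derive_mpm_list(left_mode, above_mode):
    """Build an MPM candidate list from the left and above neighbor modes."""
    candidates = []

    def push(mode):
        if mode not in candidates and len(candidates) < MPM_LIST_SIZE:
            candidates.append(mode)

    # Neighboring intra modes are the strongest predictors.
    push(PLANAR)
    push(left_mode)
    push(above_mode)
    push(DC)

    # Fill remaining slots with modes adjacent to the angular neighbors.
    for mode in (left_mode, above_mode):
        if mode > DC:  # angular mode
            for adj in angular_neighbors(mode):
                push(adj)

    # Fallback defaults (vertical, horizontal, and nearby angles) if still short.
    for default in (50, 18, 46, 54):
        push(default)

    return candidates

# Example: left block vertical (50), above block planar (0)
print(derive_mpm_list(50, 0))   # [0, 50, 1, 49, 51, 18]

The intuition is the same one the abstract relies on: neighboring blocks usually share texture orientation, so their modes (and modes close to them) are cheap to signal, which breaks down when the content is a TSDF-derived texture map rather than a natural image.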
© (2023) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Jihoon Do, Soowoong Kim, and Jung Won Kang "Most probable mode derivation for multi-view texture map", Proc. SPIE 12592, International Workshop on Advanced Imaging Technology (IWAIT) 2023, 125920J (25 March 2023); https://doi.org/10.1117/12.2666964
KEYWORDS: Volume rendering, Video, Video compression, High efficiency video coding, Video coding, Voxels, Discontinuities