
Overview of EventEgoHands.
Reconstructing 3D hand meshes is a challenging but important task for human-computer interaction and AR/VR applications. RGB and/or depth cameras have been widely used for this task, but methods relying on these conventional cameras struggle in low-light environments and under motion blur. Event cameras have therefore been gaining attention in recent years, as their high dynamic range and high temporal resolution address these limitations. Despite these advantages, event cameras are sensitive to noise caused by background or camera motion, which has constrained existing studies to static backgrounds and fixed cameras. In this work, we propose EventEgoHands, a novel method for event-based 3D hand mesh reconstruction from an egocentric view. Our approach introduces a Hand Segmentation Module that extracts hand regions, effectively mitigating the influence of dynamic background events. We evaluated our approach on the N-HOT3D dataset and demonstrated its effectiveness, improving MPJPE by more than 4.5 cm (approximately 43%).
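As a rough sketch of the idea (not the authors' released implementation), the snippet below shows how raw events could be filtered with a predicted hand mask and how MPJPE could be computed against ground-truth joints; the function names, array layouts, and shapes are assumptions for illustration only.

import numpy as np

def mask_events(events, hand_mask):
    """Keep only events that fall inside the predicted hand region.

    events: (N, 4) array of (x, y, t, polarity) with pixel coordinates.
    hand_mask: (H, W) boolean array, True where a hand is segmented.
    """
    x = events[:, 0].astype(int)
    y = events[:, 1].astype(int)
    keep = hand_mask[y, x]  # look up the mask at each event's pixel
    return events[keep]

def mpjpe(pred_joints, gt_joints):
    """Mean Per-Joint Position Error, in the same unit as the inputs.

    pred_joints, gt_joints: (J, 3) arrays of 3D joint positions.
    """
    return np.linalg.norm(pred_joints - gt_joints, axis=-1).mean()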
Sample from N-HOT3D. We use the MANO ground-truth annotations and RGB images directly from HOT3D, while independently providing the raw events and hand masks.
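For concreteness, one way such a sample could be organized is sketched below; the field names and shapes are hypothetical and do not describe the released data format.

from dataclasses import dataclass
import numpy as np

@dataclass
class NHOT3DSample:
    events: np.ndarray       # (N, 4) raw events (x, y, t, polarity), provided with N-HOT3D
    hand_mask: np.ndarray    # (H, W) boolean hand-region mask, provided with N-HOT3D
    mano_params: np.ndarray  # MANO ground-truth pose/shape parameters, taken from HOT3D
    rgb: np.ndarray          # (H, W, 3) reference RGB image, taken from HOT3D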
Qualitative Evaluation. We compare our method with EventHands and Ev2Hands as baselines. Red arrows indicate failure regions. RGB images are shown only for reference and were not used as input.
@inproceedings{Hara2025EventEgoHands,
author = {Hara, Ryosei and Ikeda, Wataru and Hatano, Masashi and Isogawa, Mariko},
title = {EventEgoHands: Event-based Egocentric 3D Hand Mesh Reconstruction},
booktitle = {IEEE International Conference on Image Processing (ICIP)},
year = {2025},
}