Learn to Predict How Humans Manipulate Large-Sized Objects From Interactive Motions
Weilin Wan, Lei Yang, Lingjie Liu, Zhuoying Zhang, Ruixing Jia, Yi-King Choi
Jia Pan, Christian Theobalt, Taku Komura, Wenping Wang

[Paper]
[GitHub]
[Dataset]
Our goal is to predict human-object motion at future time steps.

Introduction

    We focus on full-body human interactions with large-sized daily objects and aim to predict the future states of the object and the human given a sequential observation of their interaction. Since no existing dataset is dedicated to such interactions, we collected a large-scale dataset containing thousands of interactions for training and evaluation. We also observe that an object's intrinsic physical properties are useful for predicting its motion, and thus design a set of object dynamic descriptors to encode these properties. Treating the dynamic descriptors as a new modality, we propose a graph neural network, HO-GCN, that fuses motion data and dynamic descriptors for the prediction task; a conceptual sketch of this fusion is given below.
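    To make the two-modality idea concrete, here is a minimal, hypothetical PyTorch sketch of a graph network that fuses human/object motion features with a per-object dynamic descriptor. This is not the released HO-GCN implementation; all module names (MotionGraphLayer, HOGCNSketch), layer sizes, the descriptor dimension, and the graph construction are assumptions made purely for illustration. Please refer to the [GitHub] code for the actual model.

	import torch
	import torch.nn as nn

	class MotionGraphLayer(nn.Module):
	    """One round of message passing over a human-object graph.

	    `adj` is a fixed (N, N) row-normalized adjacency matrix connecting
	    body-skeleton joints and object keypoints; node features are updated
	    by aggregating neighbor features and applying a shared MLP.
	    """
	    def __init__(self, in_dim, out_dim):
	        super().__init__()
	        self.mlp = nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU())

	    def forward(self, x, adj):
	        # x: (batch, N, in_dim); adj: (N, N)
	        agg = torch.einsum('ij,bjd->bid', adj, x)  # neighbor aggregation
	        return self.mlp(agg)

	class HOGCNSketch(nn.Module):
	    """Illustrative HO-GCN-style predictor: encodes observed motion,
	    injects the object dynamic descriptor as an extra per-node signal,
	    and regresses future node positions."""
	    def __init__(self, node_dim=3, desc_dim=16, hidden=128, horizon=10):
	        super().__init__()
	        self.desc_proj = nn.Linear(desc_dim, hidden)    # descriptor modality
	        self.enc = MotionGraphLayer(node_dim, hidden)   # motion modality
	        self.fuse = MotionGraphLayer(hidden, hidden)    # fused message passing
	        self.head = nn.Linear(hidden, horizon * node_dim)
	        self.horizon, self.node_dim = horizon, node_dim

	    def forward(self, nodes, desc, adj):
	        # nodes: (B, N, 3) observed positions of joints + object keypoints
	        # desc:  (B, desc_dim) object dynamic descriptor (assumed here to
	        #        be a single vector encoding intrinsic physical properties)
	        h = self.enc(nodes, adj) + self.desc_proj(desc).unsqueeze(1)
	        h = self.fuse(h, adj)
	        out = self.head(h)  # (B, N, horizon * 3)
	        return out.view(nodes.size(0), -1, self.horizon, self.node_dim)

    In this sketch the descriptor is broadcast to every graph node before fusion; the paper's actual descriptor design and fusion scheme may differ.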


Video


Material


Learn to Predict How Humans Manipulate Large-Sized Objects From Interactive Motions
IEEE Robotics and Automation Letters, 2022

[Paper] [GitHub] [Dataset] [Slides]


Citation

[Bibtex]
@article{wan2022learn,
	title={Learn to Predict How Humans Manipulate Large-Sized Objects From Interactive Motions},
	author={Wan, Weilin and Yang, Lei and Liu, Lingjie and Zhang, Zhuoying and Jia, Ruixing and Choi, Yi-King 
		and Pan, Jia and Theobalt, Christian and Komura, Taku and Wang, Wenping},
	journal={IEEE Robotics and Automation Letters},
	volume={7},
	number={2},
	pages={4702--4709},
	year={2022},
	publisher={IEEE}
}


Please contact wanwl@connect.hku.hk if you have any questions!



Thanks to Richard Zhang for the website template.