We introduce RPM-Net, a deep learning-based approach that simultaneously infers movable parts and hallucinates their motions from a single, un-segmented, and possibly partial, 3D point cloud shape. RPM-Net is a novel Recurrent Neural Network (RNN), composed of an encoder-decoder pair with interleaved Long Short-Term Memory (LSTM) components, which together predict a temporal sequence of pointwise displacements for the input point cloud. At the same time, the displacements allow the network to learn movable parts, resulting in a motion-based shape segmentation. Recursive application of RPM-Net on the obtained parts can predict finer-level part motions, resulting in a hierarchical object segmentation. Furthermore, we develop a separate network to estimate part mobilities, i.e., per-part motion parameters, from the segmented motion sequence. Both networks learn deep predictive models from a training set that exemplifies a variety of mobilities for diverse objects. We show results of simultaneous motion and part predictions from synthetic and real scans of 3D objects exhibiting a variety of part mobilities, possibly involving multiple movable parts.
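The encoder-LSTM-decoder pipeline described above can be illustrated with a toy sketch. This is not the authors' network: the PointNet-style max-pooled encoder, the single hand-rolled LSTM cell, and the linear decoder are all simplifying assumptions, used only to show how a recurrent model can unroll a temporal sequence of per-point displacements from one encoded point cloud.

```python
# Illustrative sketch (assumed architecture, not the paper's exact model):
# encode a point cloud once, then unroll an LSTM for T steps, decoding a
# per-point displacement field at every step.
import numpy as np

rng = np.random.default_rng(0)

def lstm_cell(x, h, c, W):
    # Standard LSTM cell: gates computed from the concatenation [x, h].
    z = np.concatenate([x, h]) @ W            # shape (4H,)
    i, f, o, g = np.split(z, 4)
    i, f, o = 1/(1+np.exp(-i)), 1/(1+np.exp(-f)), 1/(1+np.exp(-o))
    g = np.tanh(g)
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

N, D, H, T = 128, 3, 16, 5                    # points, coords, hidden size, time steps
points = rng.normal(size=(N, D))              # the un-segmented input point cloud

# "Encoder": shared per-point linear map + ReLU + max-pool to one global
# feature (a PointNet-style simplification, assumed for this sketch).
W_enc = rng.normal(scale=0.1, size=(D, H))
feat = np.maximum(points @ W_enc, 0).max(axis=0)   # global feature, shape (H,)

W_lstm = rng.normal(scale=0.1, size=(2 * H, 4 * H))
W_dec = rng.normal(scale=0.1, size=(H, N * D))     # "decoder" to pointwise displacements

h, c = np.zeros(H), np.zeros(H)
seq = []
for _ in range(T):
    h, c = lstm_cell(feat, h, c, W_lstm)
    disp = (h @ W_dec).reshape(N, D)          # displacement field at this time step
    seq.append(disp)

seq = np.stack(seq)                           # (T, N, D) temporal displacement sequence
print(seq.shape)
```

In the actual method, a segmentation into movable parts is learned jointly from such displacement sequences; grouping points whose predicted displacements move coherently is the intuition behind the motion-based segmentation.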

Motion prediction, Part mobility, Partial scans, Point clouds, Shape analysis
ACM Transactions on Graphics

Yan, Z. (Zihao), Hu, R. (Ruizhen), Yan, X. (Xingguang), Chen, L. (Luanmin), van Kaick, O. (Oliver), Zhang, H. (Hao), & Huang, H. (Hui). (2019). RPM-Net: Recurrent prediction of motion and parts from point cloud. ACM Transactions on Graphics, 38(6). doi:10.1145/3355089.3356573