P2MAT-NET: learning medial axis transform from sparse point clouds. (English) Zbl 1508.68392

Summary: The medial axis transform (MAT) of a 3D shape is the set of centers and radii of its maximally inscribed spheres. It is a complete shape descriptor from which the original shape can be reconstructed, and a compact representation that jointly captures the geometry, topology, and symmetry of a given shape. In this work, we present P2MAT-NET, a neural network that learns the pattern of sparse point clouds and transforms them into spheres approximating the MAT. Experimental results show that P2MAT-NET outperforms state-of-the-art methods in computing the MAT from point clouds, in terms of how well the computed MAT approximates the 3D shape. The computed MAT can also serve as an intermediate descriptor for downstream applications such as 3D shape recognition from point clouds, where our results show recognition performance competitive with state-of-the-art methods.
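As the summary notes, the MAT is a complete descriptor: the original shape is recovered as the union of its medial spheres. A minimal sketch of the resulting membership test (all function and variable names here are hypothetical illustrations, not from the paper):

```python
import numpy as np

def inside_mat(points, centers, radii):
    """A query point lies in the shape reconstructed from a MAT
    iff it falls inside at least one medial sphere (center, radius)."""
    # pairwise distances, shape (num_points, num_spheres), via broadcasting
    d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=-1)
    return (d <= radii[None, :]).any(axis=1)

# toy MAT: two overlapping unit-diameter spheres approximating a capsule
centers = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
radii = np.array([0.5, 0.5])
queries = np.array([[0.0, 0.0, 0.0],   # at a sphere center
                    [0.5, 0.0, 0.0],   # on the boundary of both spheres
                    [2.0, 0.0, 0.0]])  # outside the capsule
print(inside_mat(queries, centers, radii))  # [ True  True False]
```

This union-of-spheres view is also what makes the quality measure in the summary concrete: the closer the predicted centers and radii are to the true medial spheres, the better the union approximates the input shape.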

MSC:

68U05 Computer graphics; computational geometry (digital and algorithmic aspects)
68T05 Learning and adaptive systems in artificial intelligence
68T10 Pattern recognition, speech recognition
Full Text: DOI

References:

[1] Amenta, N.; Bern, M., Surface reconstruction by Voronoi filtering, Discrete Comput. Geom., 22, 481-504 (1999) · Zbl 0939.68138
[2] Amenta, N.; Choi, S.; Kolluri, R. K., The power crust, (Proceedings of ACM Symposium on Solid Modeling (2001)), 244-266
[3] Ben-Shabat, Y.; Lindenbaum, M.; Fischer, A., Nesti-Net: normal estimation for unstructured 3D point clouds using convolutional neural networks, (Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR’19) (2019)), 10112-10120
[4] Berger, M.; Levine, J. A.; Nonato, L. G.; Taubin, G.; Silva, C. T., A benchmark for surface reconstruction, ACM Trans. Graph., 32, Article 20 pp. (2013) · Zbl 1322.68211
[5] Berkiten, S.; Halber, M.; Solomon, J.; Ma, C.; Li, H.; Rusinkiewicz, S., Learning detail transfer based on geometric features, Comput. Graph. Forum, 36, 361-373 (2017)
[6] Blum, H., A transformation for extracting new descriptors of shape, (Models for the Perception of Speech and Visual Form (1967)), 362-380
[7] Guerrero, P.; Kleiman, Y.; Ovsjanikov, M.; Mitra, N. J., PCPNet: learning local shape properties from raw point clouds, Comput. Graph. Forum, 37, 75-85 (2018)
[8] Hu, J.; Wang, B.; Qian, L.; Pan, Y.; Guo, X.; Liu, L.; Wang, W., MAT-Net: medial axis transform network for 3D object recognition, (Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI’19) (2019)), 774-781
[9] Huang, Z.; Carr, N.; Ju, T., Variational implicit point set surfaces, ACM Trans. Graph., 38, Article 124 pp. (2019)
[10] Kazhdan, M.; Hoppe, H., Screened Poisson surface reconstruction, ACM Trans. Graph., 32, Article 29 pp. (2013) · Zbl 1322.68228
[11] Kazhdan, M.; Bolitho, M.; Hoppe, H., Poisson surface reconstruction, (Proceedings of the Eurographics Symposium on Geometry Processing (SGP’06) (2006)), 61-70
[12] Li, J.; Chen, B. M.; Lee, G. H., SO-Net: self-organizing network for point cloud analysis, (Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR’18) (2018)), 9397-9406
[13] Li, P.; Wang, B.; Sun, F.; Guo, X.; Zhang, C.; Wang, W., Q-MAT: computing medial axis transform by quadratic error minimization, ACM Trans. Graph., 35 (2015)
[14] Li, Y.; Bu, R.; Sun, M.; Wu, W.; Di, X.; Chen, B., PointCNN: convolution on x-transformed points, (Proceedings of the International Conference on Neural Information Processing Systems (NeurIPS’18) (2018)), 828-838
[15] Lu, W.; Shi, Z.; Sun, J.; Wang, B., Surface reconstruction based on the modified Gauss formula, ACM Trans. Graph., 38, Article 2 pp. (2018)
[16] Ma, J.; Bae, S. W.; Choi, S., 3D medial axis point approximation using nearest neighbors and the normal field, Vis. Comput., 28, 7-19 (2012)
[17] Monti, F.; Boscaini, D.; Masci, J.; Rodolà, E.; Svoboda, J.; Bronstein, M. M., Geometric deep learning on graphs and manifolds using mixture model CNNs, (Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR’17) (2017)), 5425-5434
[18] Pan, Y.; Wang, B.; Guo, X.; Zeng, H.; Ma, Y.; Wang, W., Q-MAT+: an error-controllable and feature-sensitive simplification algorithm for medial axis transform, Comput. Aided Geom. Des., 71, 16-29 (2019) · Zbl 1505.65135
[19] Qi, C. R.; Su, H.; Mo, K.; Guibas, L. J., PointNet: deep learning on point sets for 3D classification and segmentation, (Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR’17) (2017)), 652-660
[20] Qi, C. R.; Yi, L.; Su, H.; Guibas, L. J., PointNet++: deep hierarchical feature learning on point sets in a metric space, (Proceedings of the International Conference on Neural Information Processing Systems (NeurIPS’17) (2017)), 5105-5114
[21] Rebain, D.; Angles, B.; Valentin, J. P.C.; Vining, N.; Peethambaran, J.; Izadi, S.; Tagliasacchi, A., LSMAT: least squares medial axis transform, Comput. Graph. Forum (2019)
[22] Sun, F.; Choi, Y. K.; Yu, Y.; Wang, W., Medial meshes – a compact and accurate representation of medial axis transform, IEEE Trans. Vis. Comput. Graph., 22, 1278-1290 (2016)
[23] Tagliasacchi, A.; Delame, T.; Spagnuolo, M.; Amenta, N.; Telea, A., 3D skeletons: a state-of-the-art report, Comput. Graph. Forum, 35, 573-597 (2016)
[24] Tang, J.; Han, X.; Pan, J.; Jia, K.; Tong, X., A skeleton-bridged deep learning approach for generating meshes of complex topologies from single RGB images, (Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR’19) (2019)), 4541-4550
[25] The CGAL Project, CGAL User and Reference Manual, 4.14 ed., CGAL Editorial Board (2019)
[26] Thiery, J. M.; Guy, E.; Boubekeur, T., Sphere-meshes: shape approximation using spherical quadric error metrics, ACM Trans. Graph., 32, 1 (2013)
[27] Wang, P. S.; Liu, Y.; Guo, Y. X.; Sun, C. Y.; Tong, X., O-CNN: octree-based convolutional neural networks for 3D shape analysis, ACM Trans. Graph., 36 (2017)
[28] Wang, C.; Samari, B.; Siddiqi, K., Local spectral graph convolution for point set feature learning, (Proceedings of the European Conference on Computer Vision (ECCV’18) (2018)), 56-71
[29] Wang, S.; Suo, S.; Ma, W.; Pokrovsky, A.; Urtasun, R., Deep parametric continuous convolutional neural networks, (Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR’18) (2018)), 2589-2597
[30] Wang, Y.; Sun, Y.; Liu, Z.; Sarma, S. E.; Bronstein, M. M.; Solomon, J. M., Dynamic graph CNN for learning on point clouds, ACM Trans. Graph. (2019)
[31] Wu, Z.; Song, S.; Khosla, A.; Yu, F.; Zhang, L.; Tang, X.; Xiao, J., 3D ShapeNets: a deep representation for volumetric shapes, (Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR’15) (2015)), 1912-1920
[32] Yang, B.; Yao, J.; Guo, X., DMAT: deformable medial axis transform for animated mesh approximation, Comput. Graph. Forum, 37, 301-311 (2018)
[33] Yifan, W.; Wu, S.; Huang, H.; Cohen-Or, D.; Sorkine-Hornung, O., Patch-based progressive 3D point set upsampling, (Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR’19) (2019)), 5951-5960
[34] Yin, K.; Huang, H.; Cohen-Or, D.; Zhang, H., P2P-NET: bidirectional point displacement net for shape transform, ACM Trans. Graph., 37 (2018)
[35] Yu, L.; Li, X.; Fu, C. W.; Cohen-Or, D.; Heng, P. A., EC-Net: an edge-aware point set consolidation network, (Proceedings of the European Conference on Computer Vision (ECCV’18) (2018)), 386-402
[36] Yu, L.; Li, X.; Fu, C. W.; Cohen-Or, D.; Heng, P. A., PU-Net: point cloud upsampling network, (Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR’18) (2018)), 2790-2799