We introduce SAM2Point, a preliminary exploration that adapts the Segment Anything Model 2 (SAM 2) for zero-shot, promptable 3D segmentation. Our framework supports various prompt types, including 3D points, boxes, and masks, and generalizes across diverse scenarios such as 3D objects, indoor scenes, outdoor scenes, and raw LiDAR.
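As a concrete illustration of the promptable interface, below is a minimal sketch of how the three 3D prompt types could be represented before being projected into the voxelized input. The class and field names here are hypothetical assumptions for illustration only, not the official SAM2Point API.

# Hypothetical data structures for the three 3D prompt types (illustration only).
from dataclasses import dataclass
import numpy as np

@dataclass
class PointPrompt:
    xyz: np.ndarray         # (3,) coordinate of the prompted 3D point

@dataclass
class BoxPrompt:
    min_xyz: np.ndarray     # (3,) lower corner of the 3D bounding box
    max_xyz: np.ndarray     # (3,) upper corner of the 3D bounding box

@dataclass
class MaskPrompt:
    voxel_mask: np.ndarray  # (D, H, W) boolean occupancy mask in voxel space

# Example: prompt an object in an indoor scene with a single 3D point.
prompt = PointPrompt(xyz=np.array([1.2, 0.8, 0.45]))
print(prompt)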
The Segmentation Paradigm of SAM2Point.
To the best of our knowledge, SAM2Point presents the most faithful implementation of SAM in 3D, demonstrating superior implementation efficiency, promptable flexibility, and generalization capability for 3D segmentation.
Comparison of SAM2Point and Previous SAM-based Methods.
We showcase demonstrations of SAM2Point segmenting 3D data with various 3D prompts across different datasets.
3D Object Segmentation with SAM2Point on Objaverse (Deitke et al., 2023). The 3D prompt and segmentation results are highlighted in red and green, respectively.
3D Indoor Scene Segmentation with SAM2Point on S3DIS (Armeni et al., 2016). The 3D prompt and segmentation results are highlighted in red and green, respectively.
3D Indoor Scene Segmentation with SAM2Point on ScanNet (Dai et al., 2017). The 3D prompt and segmentation results are highlighted in red and green, respectively.
3D Outdoor Scene Segmentation with SAM2Point on Semantic3D (Hackel et al., 2017). The 3D prompt and segmentation results are highlighted in red and green, respectively.
3D Raw LiDAR Segmentation with SAM2Point on KITTI (Geiger et al., 2012). The 3D prompt and segmentation results are highlighted in red and green, respectively.
We showcase the multi-directional videos generated during SAM2Point's segmentation process; a minimal sketch of how such videos can be formed from a voxelized input follows the examples below.
Multi-directional Videos of a 3D Object from SAM2Point.
Multi-directional Videos of a 3D Indoor Scene from SAM2Point.
Multi-directional Videos of a 3D Outdoor Scene from SAM2Point.
Multi-directional Videos of Raw 3D LiDAR Data from SAM2Point.
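To clarify how a 3D input can be read as videos, the following is a minimal NumPy sketch of the underlying idea: voxelize the input into a colored grid, then traverse the grid as six slice sequences, two per axis, starting from the prompted location. The grid resolution, the last-point-wins coloring, and the exact frame ordering are illustrative assumptions rather than the official SAM2Point implementation.

# A minimal sketch of forming multi-directional videos from a voxelized point cloud.
import numpy as np

def voxelize(points, colors, resolution=64):
    """Map an (N, 3) point cloud with (N, 3) RGB colors onto an (R, R, R, 3) voxel grid."""
    mins, maxs = points.min(axis=0), points.max(axis=0)
    idx = ((points - mins) / (maxs - mins + 1e-8) * (resolution - 1)).astype(int)
    grid = np.zeros((resolution, resolution, resolution, 3), dtype=np.float32)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = colors  # last point per voxel wins
    return grid

def multi_directional_videos(grid, prompt_voxel):
    """Split the grid into six frame sequences starting at the prompted voxel."""
    videos = {}
    for axis, start in zip(range(3), prompt_voxel):
        frames = np.moveaxis(grid, axis, 0)         # treat this axis as the time axis
        videos[f"axis{axis}+"] = frames[start:]     # frames moving in the + direction
        videos[f"axis{axis}-"] = frames[start::-1]  # frames moving in the - direction
    return videos

# Toy usage: a random point cloud with a prompt near the middle of the grid.
pts = np.random.rand(1000, 3)
rgb = np.random.rand(1000, 3)
grid = voxelize(pts, rgb)
videos = multi_directional_videos(grid, prompt_voxel=(32, 32, 32))
print({name: v.shape for name, v in videos.items()})  # each value is (T, 64, 64, 3)

Each of the six slice sequences can then be treated as an ordinary RGB video, the input form SAM 2 expects, and the resulting 2D masks would finally be mapped back to the voxels they came from.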
@article{guo2024sam2point,
title={SAM2Point: Segment Any 3D as Videos in Zero-shot and Promptable Manners},
author={Guo, Ziyu and Zhang, Renrui and Zhu, Xiangyang and Tong, Chengzhuo and Gao, Peng and Li, Chunyuan and Heng, Pheng-Ann},
journal={arXiv},
year={2024}
}