ZeroFlow: Fast Zero Label Scene Flow via Distillation

Published as an arXiv preprint, 2023

Abstract: Scene flow estimation is the task of describing the 3D motion field between temporally successive point clouds. State-of-the-art methods use strong priors and test-time optimization techniques, but require on the order of tens of seconds for large-scale point clouds, making them unusable as computer vision primitives for real-time applications such as open-world object detection. Feed-forward methods are considerably faster, running on the order of tens to hundreds of milliseconds for large-scale point clouds, but require expensive human supervision. To address both limitations, we propose Scene Flow via Distillation, a simple distillation framework that uses a label-free optimization method to produce pseudo-labels to supervise a feed-forward model. Our instantiation of this framework, ZeroFlow, produces scene flow estimates in real time on large-scale point clouds at quality competitive with state-of-the-art methods while using zero human labels. Notably, at test time ZeroFlow is over 1000× faster than label-free state-of-the-art optimization-based methods on large-scale point clouds, and it is over 1000× cheaper to train on unlabeled data than the cost of human annotation of that data. To facilitate research reuse, we release our code, trained model weights, and high-quality pseudo-labels for the Argoverse 2 and Waymo Open datasets.
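As a concrete reading of the framework, below is a minimal PyTorch-style sketch of the two-stage pipeline the abstract describes: a slow, label-free optimizer pseudo-labels raw point cloud pairs offline, then a fast feed-forward student is trained to regress those pseudo-labels. All names here (PointwiseFlowMLP, optimize_flow, the nearest-neighbor stub, the L1 loss) are illustrative assumptions, not the released ZeroFlow code.

import torch
import torch.nn as nn

class PointwiseFlowMLP(nn.Module):
    """Hypothetical stand-in for the fast feed-forward student network."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 3),  # per-point 3D flow vector
        )

    def forward(self, pc_t, pc_t1):
        # Real feed-forward estimators condition on both frames; this toy
        # student only sees frame t, to keep the sketch short.
        return self.net(pc_t)

def optimize_flow(pc_t, pc_t1):
    """Hypothetical label-free, optimization-style pseudo-labeler, stubbed
    here with nearest-neighbor displacement purely for illustration."""
    d = torch.cdist(pc_t, pc_t1)   # (N, M) pairwise distances
    nn_idx = d.argmin(dim=1)       # nearest neighbor in frame t+1
    return pc_t1[nn_idx] - pc_t    # crude per-point pseudo scene flow

# Step 1 (offline, slow, label-free): pseudo-label every frame pair.
pairs = [(torch.randn(256, 3), torch.randn(256, 3)) for _ in range(8)]
pseudo = [(a, b, optimize_flow(a, b)) for a, b in pairs]

# Step 2: distill into the feed-forward student with a simple regression loss.
student = PointwiseFlowMLP()
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
for epoch in range(10):
    for pc_t, pc_t1, flow_label in pseudo:
        loss = (student(pc_t, pc_t1) - flow_label).abs().mean()  # L1 on flow
        opt.zero_grad()
        loss.backward()
        opt.step()

The point of the design is that the expensive optimization cost is paid once, offline, at training time, while inference reduces to a single fast forward pass.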

arXiv Preprint

Bibtex:

@misc{vedder2023zeroflow,
      title={ZeroFlow: Fast Zero Label Scene Flow via Distillation},
      author={Kyle Vedder and Neehar Peri and Nathaniel Chodosh and Ishan Khatri and Eric Eaton and Dinesh Jayaraman and Yang Liu and Deva Ramanan and James Hays},
      year={2023},
      eprint={2305.10424},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}