ADCNet: End-to-end perception with raw radar ADC data
Published as an arXiv pre-print, 2023
Abstract: As autonomous vehicles and advanced driver-assistance systems have entered wider deployment, there is increased interest in building robust perception systems using radars. Radar-based systems are lower-cost and more robust to adverse weather conditions than their LiDAR-based counterparts; however, the point clouds they produce are typically noisy and sparse by comparison. To address these challenges, recent research has focused on consuming the raw radar data instead of the final radar point cloud. We build on this line of work and demonstrate that by bringing elements of the signal processing pipeline into our network and then pre-training on the signal processing task, we are able to achieve state-of-the-art detection performance on the RADIal dataset. Our method uses expensive offline signal processing algorithms to pseudo-label data and trains a network to distill this information into a fast convolutional backbone, which can then be fine-tuned for perception tasks. Extensive experimental results corroborate the effectiveness of the proposed techniques.
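To make the two-stage recipe described in the abstract concrete, below is a minimal, hypothetical PyTorch sketch: an offline range-Doppler FFT stands in for the expensive signal processing pipeline that produces pseudo-labels, a small convolutional backbone is pre-trained to distill those pseudo-labels directly from raw ADC samples, and a toy detection head is then attached for fine-tuning. All module names, tensor shapes, and the FFT-based pseudo-labeler are illustrative assumptions, not the implementation from the paper.

# Hypothetical sketch of the two-stage training described in the abstract:
# (1) pre-train a convolutional backbone to reproduce pseudo-labels from an
#     offline signal-processing pipeline (approximated here by a range-Doppler
#     FFT on raw ADC samples), then (2) fine-tune for a perception task.
import torch
import torch.nn as nn


def offline_pseudo_label(adc: torch.Tensor) -> torch.Tensor:
    """Stand-in for the expensive offline signal-processing pipeline.

    adc: complex tensor of shape (batch, antennas, chirps, samples).
    Returns a log-magnitude range-Doppler map per antenna (assumed target).
    """
    rd = torch.fft.fft2(adc, dim=(-2, -1))   # range-Doppler FFT
    return torch.log1p(rd.abs())


class ConvBackbone(nn.Module):
    """Fast convolutional backbone on raw ADC data (real/imag stacked as channels)."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, out_ch, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)


# ---- Stage 1: distillation pre-training on the signal-processing task ----
antennas, chirps, samples = 4, 128, 256
backbone = ConvBackbone(in_ch=2 * antennas, out_ch=antennas)
opt = torch.optim.Adam(backbone.parameters(), lr=1e-3)

adc = torch.randn(8, antennas, chirps, samples, dtype=torch.complex64)  # fake batch
target = offline_pseudo_label(adc)                       # pseudo-labels
inputs = torch.cat([adc.real, adc.imag], dim=1)          # (8, 2*antennas, chirps, samples)

pred = backbone(inputs)
loss = nn.functional.mse_loss(pred, target)              # distill the offline pipeline
loss.backward()
opt.step()

# ---- Stage 2: fine-tune the pre-trained backbone for a perception task ----
detection_head = nn.Conv2d(antennas, 1, 1)               # toy per-cell objectness head
# ... continue training backbone + detection_head on labelled detection data

The point of the sketch is the ordering: the backbone first learns to approximate the offline signal processing, and only afterwards is it adapted to detection, so the detection fine-tuning starts from features that already encode range-Doppler structure.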
Links: arXiv: https://arxiv.org/abs/2303.11420
BibTeX:
@misc{yang2023adcnet,
  title={ADCNet: End-to-end perception with raw radar ADC data},
  author={Bo Yang and Ishan Khatri and Michael Happold and Chulong Chen},
  year={2023},
  eprint={2303.11420},
  archivePrefix={arXiv},
  primaryClass={eess.SP}
}