Deep-Neural-Network-Enabled Vehicle Detection Using High-Resolution Automotive Radar Imaging

Ruxin Zheng, Shunqiao Sun, Hongshan Liu, Teresa Wu

Research output: Contribution to journal › Article › peer-review


Abstract

Advanced driver assistance systems (ADASs) and autonomous vehicles rely on different types of sensors, such as camera, radar, ultrasonic, and LiDAR, to sense the surrounding environment. Compared with the other sensor types, millimeter-wave automotive radar has advantages in cost and reliability under adverse weather conditions (e.g., snow, rain, and fog) and does not suffer from lighting variations (e.g., darkness). Typical radar devices used in today's commercial vehicles with ADAS features produce sparse point clouds with low angular resolution due to a limited number of antennas. In this article, we present a machine-learning-aided signal processing chain that suppresses the radar imaging blur introduced by phase migration in time-division multiplexing multiple-input multiple-output (TDM-MIMO) radar, generating low-level high-resolution radar bird's-eye-view (BEV) spectra with rich object features. Compared with radar point clouds, radar BEV spectra incur no information loss. We then propose a temporal-fusion distance-tolerant single-stage object detection network, termed TDRadarNet, and an enhanced version, TDRadarNet+, to robustly detect vehicles at both long and short ranges on radar BEVs. We introduce a first-of-its-kind multimodal dataset containing 14 800 frames of high-resolution low-level radar BEV spectra with synchronized stereo camera RGB images and 3-D LiDAR point clouds. Our dataset achieves 0.39-m range resolution and $1.2^\circ$ azimuth angular resolution with a 100-m maximum detectable range. Moreover, we create a subdataset, the Doppler Unfolding dataset, containing 244 140 beam vectors extracted from the 3-D radar data cube. Through extensive testing and evaluation, we demonstrate that our Doppler unfolding network achieves 93.46% Doppler unfolding accuracy. Compared to YOLOv7, a state-of-the-art image-based object detection network, TDRadarNet achieves a 70.3% average precision (AP) for vehicle detection, a 21.0% improvement; TDRadarNet+ achieves a 73.9% AP, a 24.6% improvement.
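To illustrate the phase-migration effect the abstract refers to, below is a minimal Python sketch of phase-migration compensation in a TDM-MIMO virtual array. This is not the paper's implementation; all parameter values (number of TX/RX antennas, chirp interval, carrier frequency, target velocity and azimuth) are hypothetical, and the true Doppler frequency is assumed known, whereas the paper estimates it with its Doppler unfolding network.

```python
import numpy as np

# Hypothetical radar parameters (illustrative only, not from the paper).
n_tx, n_rx = 3, 4          # TDM transmitters / receivers
T_c = 50e-6                # chirp repetition interval [s]
fc = 77e9                  # carrier frequency [Hz]
c = 3e8                    # speed of light [m/s]
v = 20.0                   # target radial velocity [m/s]
f_d = 2 * v * fc / c       # target Doppler frequency [Hz]
theta = np.deg2rad(10.0)   # target azimuth

# Virtual-array snapshot: half-wavelength ULA of n_tx * n_rx elements.
# Element (m, k) is sampled during TX slot m, so on top of its spatial
# phase it accumulates the Doppler phase 2*pi*f_d*m*T_c -- the "phase
# migration" that blurs the angle (BEV) spectrum for moving targets.
m = np.arange(n_tx)[:, None]             # TX index
k = np.arange(n_rx)[None, :]             # RX index
spatial = np.exp(1j * np.pi * (m * n_rx + k) * np.sin(theta))
migration = np.exp(1j * 2 * np.pi * f_d * m * T_c)
snapshot = (spatial * migration).ravel()

# Compensation: once the unfolded Doppler is known (here assumed; the
# paper's network resolves it), remove the per-TX-slot phase ramp
# before angle processing.
tx_of_element = np.repeat(np.arange(n_tx), n_rx)
comp = np.exp(-1j * 2 * np.pi * f_d * tx_of_element * T_c)
corrected = snapshot * comp

# Angle FFT: the corrected snapshot peaks at the true azimuth, while
# the raw snapshot is shifted/blurred by the migration phase.
spec_raw = np.abs(np.fft.fftshift(np.fft.fft(snapshot, 256)))
spec_fix = np.abs(np.fft.fftshift(np.fft.fft(corrected, 256)))
print("raw peak bin:", spec_raw.argmax(), "| corrected peak bin:", spec_fix.argmax())
```

Broadly, TDM operation stretches the per-transmitter pulse repetition interval by a factor of n_tx, so the Doppler measured from the slow-time FFT is ambiguous modulo 1/(n_tx * T_c); resolving which of the n_tx aliasing hypotheses is correct is the Doppler unfolding task the abstract's subdataset and network address, and it is what makes the compensation step above possible.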

Original language: English (US)
Pages (from-to): 4815-4830
Number of pages: 16
Journal: IEEE Transactions on Aerospace and Electronic Systems
Volume: 59
Issue number: 5
State: Published - Oct 1 2023

Keywords

  • Automotive radar
  • autonomous vehicles
  • deep neural network
  • radar object detection
  • radar spectra

ASJC Scopus subject areas

  • Aerospace Engineering
  • Electrical and Electronic Engineering
