Benchmarking domain adaptation for LiDAR-based 3D object detection in autonomous driving


Date

2025

Journal Title

Journal ISSN

Volume Title

Publisher

Springer London Ltd

Access Rights

info:eu-repo/semantics/closedAccess

Abstract

The generalization capability of 3D object detection models is crucial for ensuring robust perception in autonomous driving systems. While state-of-the-art models such as Voxel R-CNN, PV-RCNN, and CenterPoint have demonstrated strong performance on publicly available datasets (e.g., KITTI, Waymo, and nuScenes), their robustness under domain shift remains an open question. In this study, we conduct a comprehensive benchmark evaluation of these models. We introduce two custom datasets: (i) a real-world dataset collected using KARSAN's autonomous minibus equipped with a 128-channel LiDAR sensor under diverse traffic conditions, and (ii) a simulated dataset generated using the AWSIM simulation platform, capturing over five hours of synthetic driving data with virtual LiDAR sensors. Our results indicate that 3D object detection performance is highly dataset-dependent: no single model achieves superior results across all datasets and metrics. Cross-dataset evaluation highlights the challenges of domain mismatch, which causes significant performance degradation when models are tested on our custom datasets, particularly in the synthetic domain. To mitigate these effects, we explore six domain adaptation techniques and demonstrate that their application substantially improves model performance. Bi3D, SESS, and Uni3D outperform UDA, CLUE, and ST3D, yielding more robust generalization across both real-world and simulated environments. These findings shed light on the potential of domain adaptation to improve model performance across domain shifts, despite the ongoing challenges in achieving consistent outcomes across all environments.

Description

Keywords

Autonomous driving, LiDAR, Domain adaptation, 3D object detection, Perception

Source

Signal Image and Video Processing

WoS Q Value

Q3

Scopus Q Value

Q2

Volume

19

Issue

12

Citation