Authors: Balim, Mustafa Alper; Hanilci, Cemal; Acir, Nurettin
Date accessioned: 2026-02-08
Date available: 2026-02-08
Date issued: 2025
ISSN: 1863-1703; 1863-1711
DOI: https://doi.org/10.1007/s11760-025-04580-z
Handle: https://hdl.handle.net/20.500.12885/5561

Abstract: The generalization capability of 3D object detection models is crucial for ensuring robust perception in autonomous driving systems. While state-of-the-art models such as Voxel R-CNN, PV-RCNN, and CenterPoint have demonstrated strong performance on publicly available datasets (e.g., KITTI, Waymo, and nuScenes), their generalization across domains remains an open question. In this study, we conduct a comprehensive benchmark evaluation of these models. We introduce two custom datasets: (i) a real-world dataset collected using KARSAN's autonomous minibus equipped with a 128-channel LiDAR sensor under diverse traffic conditions, and (ii) a simulated dataset generated using the AWSIM simulation platform, capturing over five hours of synthetic driving data with virtual LiDAR sensors. Our results indicate that 3D object detection performance is highly dataset-dependent: no single model achieves superior results across all datasets and metrics. Cross-dataset evaluation highlights the challenge of domain mismatch, which causes significant performance degradation when models are tested on our custom datasets, particularly in the synthetic domain. To mitigate these effects, we explore six domain adaptation techniques and demonstrate that their application substantially improves model performance. Bi3D, SESS, and Uni3D outperform UDA, CLUE, and ST3D, yielding more robust generalization across both real-world and simulated environments. These findings shed light on the potential of domain adaptation to improve model performance under domain shift, despite the ongoing challenges in achieving consistent outcomes across all environments.

Language: English
Access rights: info:eu-repo/semantics/closedAccess
Keywords: Autonomous driving; LiDAR; Domain adaptation; 3D object detection; Perception
Title: Benchmarking domain adaptation for LiDAR-based 3D object detection in autonomous driving
Type: Article
Volume: 19
Issue: 12
WOS ID: WOS:001567113700029
Scopus ID: 2-s2.0-105015076209
Quartiles: Q3; Q2