Visual-based obstacle avoidance method using advanced CNN for mobile robots

dc.authorid: 0000-0001-6909-7830
dc.authorid: 0000-0002-3785-1795
dc.contributor.author: Misir, Oguz
dc.contributor.author: Celik, Muhammed
dc.date.accessioned: 2026-02-08T15:15:19Z
dc.date.available: 2026-02-08T15:15:19Z
dc.date.issued: 2025
dc.department: Bursa Teknik Üniversitesi
dc.description.abstract: Artificial intelligence is one of the key factors accelerating the development of cyber-physical systems. Autonomous robots, in particular, rely heavily on deep learning technologies for sensing and interpreting their environments. In this context, this paper presents an extended MobileNetV2-based obstacle avoidance method for mobile robots. The deep network architecture used in the proposed method has a low parameter count, making it suitable for deployment on mobile devices with limited computational power. To implement the proposed method, a two-wheeled non-holonomic mobile robot was designed and equipped with a Jetson Nano development board to run the deep network architectures. Camera and ultrasonic sensor data were combined to enable the mobile robot to detect obstacles. To test the performance of the proposed method, three obstacle-filled environments were designed to simulate real-world conditions. A unique dataset was created by combining images with sensor data collected from these environments: light and dark shades of red, blue, and green were added to the camera images, with the color intensity correlated to the obstacle distance measured by the ultrasonic sensor. The extended MobileNetV2 architecture developed for the obstacle avoidance task was trained on this dataset and compared with state-of-the-art low-parameter Convolutional Neural Network (CNN) models. The proposed deep learning architecture outperformed the other models, achieving 92.78% accuracy, and the mobile robot successfully completed the obstacle avoidance task in real-world trials.
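The abstract describes encoding the ultrasonic distance reading into the camera image by tinting it with a light or dark color shade. The paper's exact pipeline is not given in this record, so the sketch below is only a minimal illustration of that idea: the function name `tint_image`, the blending weights, and the `max_distance` parameter are assumptions, not the authors' implementation.

```python
import numpy as np

def tint_image(image, distance, max_distance=100.0, channel=0):
    """Blend a color shade into an RGB image so that its intensity
    encodes obstacle distance: near obstacles give a dark shade,
    far obstacles a light one.

    image: HxWx3 uint8 array; distance: ultrasonic reading in the
    same unit as max_distance; channel: 0=red, 1=green, 2=blue.
    """
    # Normalize the distance to [0, 1]; closer obstacles yield lower values.
    ratio = np.clip(distance / max_distance, 0.0, 1.0)
    tinted = image.astype(np.float32)
    # Mix the chosen channel toward a shade whose brightness tracks distance
    # (assumed 50/50 blend; the paper's actual weights are not specified here).
    tinted[..., channel] = np.clip(
        tinted[..., channel] * 0.5 + 255.0 * 0.5 * ratio, 0.0, 255.0
    )
    return tinted.astype(np.uint8)
```

Under this sketch, a frame captured 10 cm from an obstacle gets a darker red tint than one captured 90 cm away, so the CNN can learn distance cues from color alone.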
dc.identifier.doi: 10.1016/j.iot.2025.101538
dc.identifier.issn: 2543-1536
dc.identifier.issn: 2542-6605
dc.identifier.scopus: 2-s2.0-85217666040
dc.identifier.scopusquality: Q1
dc.identifier.uri: https://doi.org/10.1016/j.iot.2025.101538
dc.identifier.uri: https://hdl.handle.net/20.500.12885/5708
dc.identifier.volume: 31
dc.identifier.wos: WOS:001490067000001
dc.identifier.wosquality: Q1
dc.indekslendigikaynak: Web of Science
dc.indekslendigikaynak: Scopus
dc.language.iso: en
dc.publisher: Elsevier
dc.relation.ispartof: Internet of Things
dc.relation.publicationcategory: Article - International Peer-Reviewed Journal - Institutional Faculty Member
dc.rights: info:eu-repo/semantics/closedAccess
dc.snmz: WOS_KA_20260207
dc.subject: Cyber-physical systems
dc.subject: Mobile robots
dc.subject: Deep learning
dc.title: Visual-based obstacle avoidance method using advanced CNN for mobile robots
dc.type: Article

Files