Yazar "Misir, Oguz" seçeneğine göre listele

Showing 1 - 2 of 2
  • Item
    Drivable path detection for a mobile robot with differential drive using a deep Learning based segmentation method for indoor navigation
    (PeerJ Inc, 2024) Misir, Oguz
    The integration of artificial intelligence into the field of robotics enables robots to perform their tasks more meaningfully. In particular, deep-learning methods contribute significantly to robots becoming intelligent cybernetic systems, and the effective use of deep learning in mobile cyber-physical systems has made mobile robots more intelligent; it can also help them determine a safe path. The drivable pathfinding problem involves a mobile robot finding a path to a target in a challenging environment with obstacles. In this paper, a semantic-segmentation-based drivable path detection method is presented for the indoor navigation of mobile robots. The proposed method uses a perspective transformation strategy that maps high-accuracy segmented images into real-world space. This transformation allows the motion space to be divided into grids based on the image perceived in real-world space. A grid-based RRT* navigation strategy was developed that uses the gridded images to enable the mobile robot to avoid obstacles and meet optimal-path requirements. Smoothing was applied to improve the path planned by the grid-based RRT* and to avoid unnecessary turning angles of the mobile robot. Thus, the mobile robot can reach the target optimally within the drivable area determined by segmentation. DeepLabv3+ with a ResNet50 backbone architecture, which has superior segmentation ability, is proposed for accurate determination of the drivable path. A Gaussian filter was used to reduce the noise caused by segmentation, and multi-Otsu thresholding was used to improve the masked images across multiple classes. The segmentation model and backbone architecture were compared with other methods in terms of performance; DeepLabv3+ with the ResNet50 backbone outperformed the compared methods by 0.21%-4.18% on many metrics. In addition, a mobile robot design is presented to test the proposed drivable path determination method, validating it in different indoor scenarios. (An illustrative sketch of the mask post-processing and grid-projection step described here appears after this listing.)
  • Item
    Visual-based obstacle avoidance method using advanced CNN for mobile robots
    (Elsevier, 2025) Misir, Oguz; Celik, Muhammed
    Artificial intelligence is one of the key factors accelerating the development of cyber-physical systems. Autonomous robots, in particular, rely heavily on deep-learning technologies for sensing and interpreting their environments. In this context, this paper presents an extended MobileNetV2-based obstacle avoidance method for mobile robots. The deep network architecture used in the proposed method has a low number of parameters, making it suitable for deployment on mobile devices with limited computational power. To implement the proposed method, a two-wheeled non-holonomic mobile robot was designed and equipped with a Jetson Nano development board to run the deep network architecture. Camera and ultrasonic sensor data were used to enable the mobile robot to detect obstacles. To test the performance of the proposed method, three different obstacle-filled environments were designed to simulate real-world conditions. A unique dataset was created by combining images with sensor data collected from the environment: light and dark shades of red, blue, and green were added to the camera images, with the color intensity correlated to the obstacle distance measured by the ultrasonic sensor. The extended MobileNetV2 architecture developed for the obstacle avoidance task was trained on this dataset and compared with state-of-the-art low-parameter Convolutional Neural Network (CNN) models. Based on the results, the proposed deep-learning architecture outperformed the other models, achieving 92.78% accuracy. Furthermore, the mobile robot successfully completed the obstacle avoidance task in real-world applications. (An illustrative sketch of the distance-to-tint fusion and a MobileNetV2-based classifier appears after this listing.)
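The first entry above describes a post-processing pipeline for the segmentation output: a Gaussian filter to suppress noise, multi-Otsu thresholding to separate the classes, and a perspective transformation that projects the drivable region into a real-world grid for the grid-based RRT* planner. The Python sketch below illustrates one plausible form of that step using OpenCV and scikit-image; the floor corner points, grid size, number of classes, and which class is treated as drivable are assumptions made for the example, not values taken from the paper.

import cv2
import numpy as np
from skimage.filters import threshold_multiotsu

def mask_to_occupancy_grid(seg_mask, src_corners, grid_shape=(40, 40)):
    """Project a segmentation mask onto a top-down occupancy grid.

    seg_mask: HxW array produced by the segmentation network.
    src_corners: four image-plane corners of the floor region
        (assumed to come from a one-off calibration).
    """
    # Gaussian filter to reduce segmentation noise.
    blurred = cv2.GaussianBlur(seg_mask.astype(np.float32), (5, 5), 0)

    # Multi-Otsu thresholding: split the mask into three classes
    # (e.g. non-drivable / unknown / drivable); the class count is assumed.
    thresholds = threshold_multiotsu(blurred, classes=3)
    labels = np.digitize(blurred, bins=thresholds)

    # Perspective transform from the image plane to a top-down grid.
    h, w = grid_shape
    dst_corners = np.float32([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]])
    H = cv2.getPerspectiveTransform(np.float32(src_corners), dst_corners)
    birdseye = cv2.warpPerspective(labels.astype(np.float32), H, (w, h),
                                   flags=cv2.INTER_NEAREST)

    # Cells belonging to the assumed drivable class become free cells
    # that a grid-based planner such as RRT* can search over.
    return birdseye == 2

A grid produced this way could then be handed to a grid-based RRT* planner and the resulting path smoothed, as the abstract describes.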
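The second entry describes fusing the ultrasonic distance reading into each camera frame as a colour tint whose intensity tracks the measured obstacle distance, and classifying the fused frames with an extended MobileNetV2. The sketch below assumes a simple linear distance-to-tint mapping, a 50% maximum blend, and a small softmax head over hypothetical steering actions (none of which are specified in the abstract); it shows one way such a fusion and a MobileNetV2-based classifier could be assembled with Keras, and is not the authors' extended architecture.

import numpy as np
import tensorflow as tf

def fuse_distance_into_image(frame_rgb, distance_cm, max_range_cm=200.0,
                             tint=(1.0, 0.0, 0.0)):
    # Closer obstacles produce a stronger tint; the linear mapping and
    # the 200 cm maximum range are assumptions for this example.
    strength = 1.0 - np.clip(distance_cm / max_range_cm, 0.0, 1.0)
    tint_img = np.array(tint, dtype=np.float32) * 255.0
    fused = ((1.0 - 0.5 * strength) * frame_rgb.astype(np.float32)
             + (0.5 * strength) * tint_img)
    return fused.astype(np.uint8)

def build_classifier(num_actions=3, input_shape=(224, 224, 3)):
    # MobileNetV2 backbone with a small classification head; the head
    # layout and the three hypothetical actions (e.g. turn left, go
    # straight, turn right) are assumptions, not the paper's design.
    backbone = tf.keras.applications.MobileNetV2(
        include_top=False, weights="imagenet", input_shape=input_shape)
    x = tf.keras.layers.GlobalAveragePooling2D()(backbone.output)
    x = tf.keras.layers.Dropout(0.2)(x)
    out = tf.keras.layers.Dense(num_actions, activation="softmax")(x)
    return tf.keras.Model(backbone.input, out)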
