Reinforcement learning-based multi-objective smart energy management for electric vehicle charging stations with priority scheduling

dc.authorid: 0000-0001-5939-9042
dc.contributor.author: Cakil, Fatih
dc.contributor.author: Aksoy, Necati
dc.date.accessioned: 2026-02-08T15:15:12Z
dc.date.available: 2026-02-08T15:15:12Z
dc.date.issued: 2025
dc.department: Bursa Teknik Üniversitesi
dc.description.abstract: Reinforcement learning (RL)-based control structures represent a transformative approach to optimizing energy management in electric vehicle (EV) charging stations, offering unparalleled adaptability and efficiency. This paper introduces a novel RL-based intelligent control strategy designed to address multi-objective challenges in EV charging, such as energy efficiency, cost-effectiveness, and user prioritization. Central to this study is the development of a unique environment model, which incorporates dynamic variables including vehicle priority, arrival times, and real-time pricing data, ensuring realistic and practical applications. Additionally, a custom reward strategy is proposed, enabling the RL agents to effectively learn and adapt to complex operational demands. The study evaluates the performance of three RL algorithms (Q-Learning, SARSA, and Expected SARSA) within the proposed environment model, demonstrating their capabilities in reducing charging costs and improving profitability. Experimental results indicate that the Q-Learning agent achieved an average cost reduction of up to 66.4% compared to conventional charging strategies, with energy costs dropping from 11.78 to 3.96 per unit in high-priority cases. Expected SARSA exhibited competitive performance, yielding up to 44.3% cost savings, whereas SARSA consistently resulted in the lowest cost efficiency, with reductions of only 46.6% in comparable scenarios. Furthermore, the RL-based scheduling framework successfully shortened peak-hour waiting times by 40%, while ensuring equitable prioritization of charging requests. Through a comprehensive set of realistic case studies and scenarios, the effectiveness of these algorithms is analyzed, focusing on their capacity to manage energy costs, enhance profitability, and adapt to fluctuating pricing conditions.
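For orientation, the three tabular temporal-difference control methods the abstract compares differ only in the bootstrap target used on each transition. The sketch below is a minimal illustration of those standard update rules, not the paper's implementation; the table shape, the hyperparameters `alpha`, `gamma`, `epsilon`, and the epsilon-greedy behavior policy are all assumptions for illustration.

```python
import numpy as np

# Illustrative sketch only: the generic tabular TD-control updates
# (Q-Learning, SARSA, Expected SARSA). Hyperparameters are assumed.

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """Off-policy: bootstrap from the greedy action in s_next."""
    target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (target - Q[s, a])

def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.99):
    """On-policy: bootstrap from the action actually taken in s_next."""
    target = r + gamma * Q[s_next, a_next]
    Q[s, a] += alpha * (target - Q[s, a])

def expected_sarsa_update(Q, s, a, r, s_next,
                          epsilon=0.1, alpha=0.1, gamma=0.99):
    """Bootstrap from the epsilon-greedy expectation over next actions."""
    n_actions = Q.shape[1]
    probs = np.full(n_actions, epsilon / n_actions)
    probs[np.argmax(Q[s_next])] += 1.0 - epsilon
    target = r + gamma * np.dot(probs, Q[s_next])
    Q[s, a] += alpha * (target - Q[s, a])
```

Because Q-Learning maximizes over the next state's values while Expected SARSA averages under the behavior policy, the two can rank differently under a given reward design, which is consistent with the cost-saving differences the abstract reports.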
dc.identifier.doi: 10.1016/j.energy.2025.135475
dc.identifier.issn: 0360-5442
dc.identifier.issn: 1873-6785
dc.identifier.scopus: 2-s2.0-86000637820
dc.identifier.scopusquality: Q1
dc.identifier.uri: https://doi.org/10.1016/j.energy.2025.135475
dc.identifier.uri: https://hdl.handle.net/20.500.12885/5660
dc.identifier.volume: 322
dc.identifier.wos: WOS:001448179700001
dc.identifier.wosquality: Q1
dc.indekslendigikaynak: Web of Science
dc.indekslendigikaynak: Scopus
dc.language.iso: en
dc.publisher: Pergamon-Elsevier Science Ltd
dc.relation.ispartof: Energy
dc.relation.publicationcategory: Makale - Uluslararası Hakemli Dergi - Kurum Öğretim Elemanı [Article - International Peer-Reviewed Journal - Institutional Faculty Member]
dc.rights: info:eu-repo/semantics/closedAccess
dc.snmz: WOS_KA_20260207
dc.subject: Reinforcement learning
dc.subject: Energy management
dc.subject: Electric vehicle
dc.subject: Charging control
dc.subject: Machine learning
dc.title: Reinforcement learning-based multi-objective smart energy management for electric vehicle charging stations with priority scheduling
dc.type: Article
