
Reinforcement learning for EV charging optimization: A holistic perspective for commercial vehicle fleets

Recent years have seen an unprecedented uptake in electric vehicles, driven by the global push to reduce carbon emissions. At the same time, intermittent renewables are being deployed at an increasing rate. These developments are putting flexibility measures such as dynamic load management in the spotlight of the energy transition. Flexibility measures must consider EV charging, as it can introduce grid constraints: in Germany, the cumulative power of all EV onboard chargers amounts to ca. 120 GW, while the German peak load only amounts to 80 GW. Commercial operations have strong incentives to optimize charging and flatten peak loads in real time, given that the highest quarter-hour can determine the power-related energy bill, and that a blown fuse due to overloading can halt operations. Increasing research efforts have therefore gone into real-time-capable optimization methods. Reinforcement Learning (RL) has gained particular attention due to its versatility, performance, and real-time capabilities. This thesis implements such an approach and introduces FleetRL as a realistic RL environment for EV charging, with a focus on commercial vehicle fleets. Through its implementation, it was found that RL saved up to 83% compared to static benchmarks, and that grid overloading was entirely avoided in some scenarios by sacrificing small portions of SOC, or by delaying the charging process. Linear optimization with one year of perfect knowledge outperformed RL, but reached its practical limits in one use case, where a feasible solution could not be found by the solver. Overall, this thesis makes a strong case for RL-based EV charging. It further provides a foundation which can be built upon: a modular, open-source software framework that integrates an MDP model, schedule generation, and non-linear battery degradation. / The electrification of the transport sector is a necessary but challenging task. Combined with growing photovoltaic production and other renewables, it creates a dilemma for the electricity grid that demands extensive flexibility measures. These measures must include EV charging, a phenomenon that has led to unprecedented load peaks. From a commercial perspective, the incentive is to optimize the charging process and ensure uptime. Research has focused on real-time optimization methods such as Deep Reinforcement Learning (DRL). This thesis introduces FleetRL as a new RL environment for EV charging of commercial fleets. Applying the framework showed that RL saved up to 83% compared to static benchmarks, and that grid overloading could be avoided entirely in most scenarios. Linear optimization outperformed RL but reached its limits in tightly constrained use cases. Having found a positive business case for every commercial application examined, this thesis makes a strong argument for RL-based charging and provides a foundation for future work through practical insights and a modular, open-source software framework.
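The abstract notes that the highest quarter-hour can determine the power-related part of a commercial energy bill, which is why peak shaving matters. A minimal sketch of that billing mechanism, with purely hypothetical numbers and function names (not taken from FleetRL):

```python
# Illustrative sketch (not part of the thesis): how the power-related
# bill component can be driven by the single highest quarter-hour.
# All profiles, names, and prices below are hypothetical.

def quarter_hour_peak_kw(load_kw: list[float]) -> float:
    """Highest 15-minute average load in kW.

    `load_kw` is assumed to already be a series of 15-minute average
    readings, so the billed peak is simply the maximum reading.
    """
    return max(load_kw)

def demand_charge(load_kw: list[float], price_eur_per_kw: float) -> float:
    """Power-related bill component: peak quarter-hour times unit price."""
    return quarter_hour_peak_kw(load_kw) * price_eur_per_kw

# Hypothetical depot load (kW, 15-min averages): one charging spike...
spiky = [40.0, 42.0, 95.0, 41.0]
# ...versus the same total energy with the peak flattened out.
flat = [54.5, 54.5, 54.5, 54.5]

print(demand_charge(spiky, 100.0))  # 9500.0
print(demand_charge(flat, 100.0))   # 5450.0
```

Both profiles deliver the same energy, but the flattened one nearly halves the demand charge, which is the incentive a real-time controller exploits.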

Identifier oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:kth-336593
Date January 2023
Creators Cording, Enzo Alexander
Publisher KTH, Skolan för industriell teknik och management (ITM)
Source Sets DiVA Archive at Upsalla University
Language English
Detected Language English
Type Student thesis, info:eu-repo/semantics/bachelorThesis, text
Format application/pdf
Rights info:eu-repo/semantics/openAccess
Relation TRITA-ITM-EX ; 2023:524
