1.
[pt] ESTUDO DE TÉCNICAS DE APRENDIZADO POR REFORÇO APLICADAS AO CONTROLE DE PROCESSOS QUÍMICOS / [en] STUDY OF REINFORCEMENT LEARNING TECHNIQUES APPLIED TO THE CONTROL OF CHEMICAL PROCESSES. 30 December 2021 (has links)
[pt] A indústria 4.0 impulsionou o desenvolvimento de novas tecnologias
para atender as demandas atuais do mercado. Uma dessas novas tecnologias
foi a incorporação de técnicas de inteligência computacional no cotidiano
da indústria química. Neste âmbito, este trabalho avaliou o desempenho de
controladores baseados em aprendizado por reforço em processos químicos
industriais. A estratégia de controle interfere diretamente na segurança e
no custo do processo. Quanto melhor for o desempenho dessa estrategia,
menor será a produção de efluentes e o consumo de insumos e energia. Os
algoritmos de aprendizado por reforço apresentaram excelentes resultados
para o primeiro estudo de caso, o reator CSTR com a cinética de Van de
Vusse. Entretanto, para implementação destes algoritmos na planta química
do Tennessee Eastman Process mostrou-se que mais estudos são necessários.
A fraca ou inexistente propriedade Markov, a alta dimensionalidade e as
peculiaridades da planta foram fatores dificultadores para os controladores
desenvolvidos obterem resultados satisfatórios. Foram avaliados para o estudo
de caso 1, os algoritmos Q-Learning, Actor Critic TD, DQL, DDPG, SAC e
TD3, e para o estudo de caso 2 foram avaliados os algoritmos CMA-ES, TRPO,
PPO, DDPG, SAC e TD3. / [en] Industry 4.0 boosted the development of new technologies to meet
current market demands. One of these new technologies was the incorporation of computational intelligence techniques into the daily life of the chemical industry. In this context, the present work evaluated the performance of controllers based on reinforcement learning in industrial chemical processes. The control strategy directly affects the safety and cost of the process: the better the performance of this strategy, the lower the production of effluents and the consumption of raw materials and energy. The reinforcement learning algorithms showed excellent results for the first case study, the CSTR reactor with Van de Vusse kinetics. However, implementing these algorithms in the Tennessee Eastman Process chemical plant showed that more studies are needed. The weak or non-existent Markov property, the high dimensionality, and the peculiarities of the plant made it difficult for the developed controllers to obtain satisfactory results. For case study 1, the algorithms Q-Learning, Actor-Critic TD, DQL, DDPG, SAC, and TD3 were evaluated, and for case study 2 the algorithms CMA-ES, TRPO, PPO, DDPG, SAC, and TD3 were evaluated.
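As an illustration of how the simplest of the algorithms listed above operates on a process-control problem, the following sketch implements a tabular Q-Learning update on a discretized state and action space. The environment interface (reset/step returning an integer state index, a reward, and a done flag), the discretization, and the hyperparameters are assumptions made for this example only and are not taken from the thesis.

import numpy as np

# Tabular Q-Learning sketch for a discretized process-control problem.
# N_STATES, N_ACTIONS and the env interface are hypothetical; a real CSTR
# study would discretize measured concentrations and manipulated inputs.
N_STATES, N_ACTIONS = 50, 10
ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.1
Q = np.zeros((N_STATES, N_ACTIONS))

def q_learning_episode(env, max_steps=200):
    """Run one episode, updating Q with the one-step temporal-difference target."""
    state = env.reset()  # assumed to be an integer index in [0, N_STATES)
    for _ in range(max_steps):
        # Epsilon-greedy exploration over the discrete action set.
        if np.random.rand() < EPSILON:
            action = np.random.randint(N_ACTIONS)
        else:
            action = int(np.argmax(Q[state]))
        next_state, reward, done = env.step(action)
        # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[state, action] += ALPHA * (reward + GAMMA * np.max(Q[next_state]) - Q[state, action])
        state = next_state
        if done:
            break

The deep algorithms in the list (DQL, DDPG, SAC, TD3) replace the table Q with neural-network approximators and, in the actor-critic methods, learn a separate policy network that outputs continuous control actions.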
2.
Reinforcement Learning for Grid Voltage Stability with FACTS. Oldeen, Joakim; Sharma, Vishnu. January 2020 (has links)
With increased penetration of renewable energy sources, maintaining equilibrium between production and consumption in the world's electrical power systems (EPS) becomes increasingly challenging. One way to increase stability and efficiency in an EPS is to use flexible alternating current transmission systems (FACTS). However, an EPS containing multiple FACTS devices with overlapping areas of influence can suffer negative effects if the reference values they operate around are not updated with sufficient temporal resolution. These reference values are usually set manually by a system operator. This master's thesis investigated how three different reinforcement learning (RL) algorithms can be used to set the reference values automatically, with higher temporal resolution than a system operator, with the aim of increasing voltage stability. The three RL algorithms, Q-learning, Deep Q-learning (DQN), and Twin-delayed deep deterministic policy gradient (TD3), were implemented in Python together with a 2-bus EPS test network acting as the environment. The 2-bus test network contains two FACTS devices: one for shunt compensation and one for series compensation. The results show that, with respect to reward, DQN performed as well as or better than the non-RL cases 98.3 % of the time on the simulation test set, while the corresponding figures for TD3 and Q-learning were 87.3 % and 78.5 % respectively. DQN achieved increased voltage stability on the test network, and TD3 showed similar results except at lower loading levels. Q-learning decreased voltage stability on a substantial portion of the test set, even compared to a case without FACTS devices. To support continued research and possible future real-life implementation, a list of suggestions for future work is provided.
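As a rough illustration of the control loop described above (and not the thesis' actual code), the sketch below shows an agent proposing new reference values for the shunt and series FACTS devices at every control interval. PlaceholderAgent and the env interface are hypothetical stand-ins for the Python test-network environment and the trained RL agents.

import numpy as np

# Illustrative interaction loop only; the 2-bus test network, FACTS models,
# reward shaping and agent internals from the thesis are not reproduced here.

class PlaceholderAgent:
    """Stand-in for a trained policy such as the one TD3 or DQN would learn."""
    def act(self, observation, noise_std=0.05):
        # Propose reference values for the shunt and series compensators,
        # sampled around a nominal operating point purely for illustration.
        refs = np.array([1.0, 0.5]) + noise_std * np.random.randn(2)
        return np.clip(refs, 0.0, 1.5)

def run_episode(env, agent, steps=96):
    """Update FACTS reference values every control interval instead of keeping
    a manually chosen static setting, accumulating a voltage-stability reward."""
    observation = env.reset()
    total_reward = 0.0
    for _ in range(steps):
        refs = agent.act(observation)               # [shunt_ref, series_ref]
        observation, reward, done = env.step(refs)  # reward reflects voltage stability
        total_reward += reward
        if done:
            break
    return total_reward

Comparing the reward accumulated by such episodes against runs with fixed reference values, or with no FACTS devices at all, is in essence the evaluation summarized above.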