1 |
Utveckling av Reglersystem för ett Labyrintspel: Modellbaserad design i praktiken / Development of an Automatic Control System for a Labyrinth Game. Nådin, Mikael; Ericsson, Kristian. January 2019.
This thesis evaluates two automatic control strategies, PID and LQ, for controlling the steel marble in a Brio labyrinth game. The objective has been for these control strategies to bring the marble through the labyrinth and to examine how well they handle this problem. A mathematical model of the problem was derived, and a detailed model of the labyrinth game was built in MathWorks' Simscape software to streamline the development of the structural design and the control system. Based on the Simscape model, the labyrinth game was fitted with the hardware necessary to perform the task. Before development of the control system commenced, tests were carried out to study the marble's movement in the two models compared with the physical labyrinth game. These tests showed that the friction in the labyrinth game is non-linear, unlike in the two models, which both exhibited similar behavior. The controllers were then implemented and evaluated both in the Simscape model and in the labyrinth game. In the Simscape model they perform equally well, and both the PID and the LQ controller can easily bring the marble through the labyrinth. In the physical labyrinth game, the LQ controller brings the marble through the labyrinth in 45% of the attempts, while the corresponding figure for the PID controller is 25%. The LQ controller generally had the better performance and was able to handle the marble's movement despite the non-linearities. The PID controller's performance was poorer, largely due to these non-linearities but also to noise in the system, by which the LQ controller is less affected. The study shows that non-linearities such as friction are difficult to model. Model-based design is a good method but can be time-consuming, and in many cases the end result can be difficult to justify.
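Purely as an illustration of the two strategies compared above, the sketch below shows a discrete PID controller and an LQ state-feedback law, each computing a tilt-angle command for one axis of the maze from the marble's position. All gains, the sampling time, the state layout, and the saturation limit are assumptions made for this sketch and are not values from the thesis.

```python
import numpy as np

# Minimal sketch of the two control strategies for a single tilt axis of the maze.
# Gains, sampling time, state layout, and limits are illustrative assumptions.

class PID:
    def __init__(self, kp, ki, kd, dt, u_max):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.dt = dt            # sampling time [s]
        self.u_max = u_max      # tilt-angle limit [rad]
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, reference, position):
        """Return a saturated tilt-angle command from the position error."""
        error = reference - position
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        u = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(-self.u_max, min(self.u_max, u))


def lq_state_feedback(x, K, u_max):
    """State feedback u = -K x, saturated to the tilt range.
    K would be computed offline, e.g. from a linearized marble-on-plane
    model via the discrete-time Riccati equation (assumption)."""
    u = float(-K @ x)
    return max(-u_max, min(u_max, u))


# Hypothetical usage: track a waypoint along the maze path.
pid_x = PID(kp=4.0, ki=0.5, kd=1.2, dt=0.02, u_max=0.1)
tilt_pid = pid_x.update(reference=0.05, position=0.03)

K = np.array([3.5, 0.8])            # assumed feedback gains
x = np.array([0.05 - 0.03, -0.01])  # assumed state: [position error, velocity]
tilt_lq = lq_state_feedback(x, K, u_max=0.1)
```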
2 |
LEAP, A Platform for Evaluation of Control Algorithms / Labyrintbaserad plattform för algoritmutvärdering. Öfjäll, Kristoffer. January 2010.
Most people are familiar with the BRIO labyrinth game and the challenge of guiding the ball through the maze. The goal of this project was to use this game to create a platform for evaluation of control algorithms. The platform was used to evaluate a few different control algorithms, both traditional automatic control algorithms and algorithms based on online incremental learning.

The game was fitted with servo actuators for tilting the maze. A camera together with computer vision algorithms was used to estimate the state of the game. The evaluated control algorithm had the task of calculating a proper control signal, given the estimated state of the game.

The evaluated learning systems used traditional control algorithms to provide initial training data. After initial training, the systems learned from their own actions, and after a while they outperformed the controller used to provide the initial training.
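To make the structure of such a platform concrete, the sketch below shows one plausible shape of the evaluation loop: a vision-based state estimate is passed to whichever controller is under test, and the resulting control signal is sent to the tilt servos. The function names, the interface, and the timing are assumptions for illustration, not the project's actual implementation.

```python
import time

def run_evaluation(estimate_state, controller, send_servo_command,
                   dt=0.02, duration=60.0):
    """Run one evaluation episode on the platform.

    Any controller exposing an update(state) method can be plugged in,
    which is how classical controllers and learning-based ones could
    share the same loop. Names and timing are illustrative assumptions.
    """
    t_end = time.time() + duration
    while time.time() < t_end:
        state = estimate_state()      # e.g. ball position/velocity from the camera
        u = controller.update(state)  # tilt commands for the two maze axes
        send_servo_command(u)         # actuate the servos
        time.sleep(dt)
```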