  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
71

Návrh vestavěného systému pro řízení výukového modelu rotačního kyvadla / Design of embedded system for control of educational model of rotary pendulum

Jajtner, Jan January 2015 (has links)
The basic aim of this work is to improve the existing model of a rotary inverted pendulum by adding new mechanical features, implementing the control algorithm on a dsPIC microcontroller, and developing the related control electronics, thus extending the functionality of the current model while making it more compact. The work contains a derivation of the dynamic equations both by analytical methods and by the multi-body formalism of SimMechanics. These are used to design a state controller stabilizing the pendulum in the inverted position. In addition, the parameters of the system are estimated experimentally. A swing-up controller is developed to drive the pendulum to the unstable position. Various state estimators are added to the controller to improve the control process, and their overall performance is compared. The last part is devoted to the development of a supervisory state automaton designed to switch between the different regulating modes, including fail-detection algorithms, providing smooth operation of the model.
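The state-controller design described above can be illustrated with a minimal discrete-time LQR sketch. The matrices below are illustrative placeholder values (a discretized double integrator), not the thesis's identified pendulum model:

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Discrete-time LQR gain K via backward Riccati iteration."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Illustrative double-integrator discretization (NOT the pendulum's true model)
dt = 0.01
A = np.array([[1.0, dt],
              [0.0, 1.0]])
B = np.array([[0.0],
              [dt]])
Q = np.diag([10.0, 1.0])   # state weights
R = np.array([[0.1]])      # input weight
K = dlqr(A, B, Q, R)

# The closed-loop poles should lie strictly inside the unit circle
poles = np.linalg.eigvals(A - B @ K)
```

The same gain computation applies once the pendulum's linearized model and identified parameters are substituted for the placeholder matrices.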
72

Comparing Control Strategies for a Satcom on the Move Antenna / Jämförelse av reglerstrategier för stabilisering av antenn i mobil satellittjänst

Hellberg, Joakim, Sundkvist, Axel January 2020 (has links)
Satellite communication is a widely known method for communicating with remote or disaster-stricken places. Sometimes the communication can be a matter of life and death, and it is thus vital that it works well. For two-way communication (such as internet access) it is necessary for the antenna on Earth to point towards the satellite with a pointing error no larger than a few tenths of a degree. For example, regulations set by the authorities in the U.S. forbid pointing errors larger than 0.5°. In some cases the antenna on Earth has to be moving while satellite communication is maintained, such as when the antenna is mounted on a vehicle and thus has to compensate for the vehicle's movement in order to point at the satellite. This application of satellite communication is called Satcom on the Move (SOTM). By constructing a Simulink model of an entire SOTM system, including vehicle dynamics, satellite position, signal behavior, sensors, and actuators, different control strategies can be compared. This thesis compares the performance of an H2 controller and an LQG controller for a static initial-acquisition case and a dynamic inertial-stabilization case. The static initial-acquisition case is performed with a search algorithm (spiral search) aiming to find the satellite signal in the shortest possible time for a given initial pointing error. The dynamic inertial-stabilization case is performed by letting the simulated vehicle drive in a slalom pattern and over uneven ground. The controllers are designed based on modern control theory. The conclusion of this thesis is that the H2 controller performs slightly better in the static test case, whereas the LQG controller performs slightly better in the dynamic test cases. However, the results are strongly influenced by the tuning of the controllers, meaning that the comparison reflects the chosen tuning parameters as much as the controllers themselves.
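The spiral search used for initial acquisition can be sketched as an Archimedean scan pattern over pointing offsets. The step, pitch, and maximum-radius values here are assumptions for illustration, not the thesis's parameters:

```python
import math

def spiral_scan(step_deg=0.1, pitch_deg=0.5, max_radius_deg=5.0):
    """Generate (az, el) pointing offsets along an Archimedean spiral.

    pitch_deg is the radial growth per full turn; step_deg approximates
    the arc-length spacing between consecutive scan points.
    """
    points, theta = [], 0.0
    while True:
        r = pitch_deg * theta / (2.0 * math.pi)
        if r > max_radius_deg:
            break
        points.append((r * math.cos(theta), r * math.sin(theta)))
        # keep the arc-length step roughly constant as the radius grows
        theta += step_deg / max(r, step_deg)
    return points

pts = spiral_scan()
```

Starting at the estimated satellite direction and spiraling outward covers the uncertainty region without gaps, which is why this pattern suits a bounded initial pointing error.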
73

Multi-Agent Coordination and Control under Information Asymmetry with Applications to Collective Load Transport

January 2018 (has links)
abstract: Coordination and control of intelligent agents as a team is considered in this thesis. Intelligent agents learn from experience, and in times of uncertainty use the knowledge acquired to make decisions and accomplish their individual or team objectives. Agent objectives are defined using cost functions designed uniquely for the collective task being performed. Individual agent costs are coupled in such a way that the group objective is attained while individual costs are minimized. Information asymmetry refers to situations where interacting agents have no knowledge or only partial knowledge of the cost functions of other agents. By virtue of their intelligence, i.e., by learning from past experiences, agents learn the cost functions of other agents, predict their responses, and act adaptively to accomplish the team's goal. Algorithms that agents use for learning others' cost functions are called learning algorithms, and algorithms agents use for computing the actuation (control) that drives them towards their goal and minimizes their cost functions are called control algorithms. Typically, knowledge acquired using learning algorithms is used in control algorithms for computing control signals. Learning and control algorithms are designed in such a way that the multi-agent system as a whole remains stable during learning and later at an equilibrium. An equilibrium is defined as the event/point where the cost functions of all agents are optimized simultaneously. Cost functions are designed so that the equilibrium coincides with the goal state the multi-agent system as a whole is trying to reach. In collective load transport, two or more agents (robots) carry a load from point A to point B in space. Robots may have different control preferences, for example different actuation abilities, but are still required to coordinate and perform load transport.
Control preferences for each robot are characterized using a scalar parameter θ_i, unique to the robot being considered and unknown to other robots. With the aid of state and control input observations, agents learn the control preferences of other agents, optimize individual costs, and drive the multi-agent system to a goal state. Two learning and control algorithms are presented. In the first algorithm (LCA-1), an existing work, each agent optimizes a cost function similar to a one-step receding-horizon optimal control problem. LCA-1 uses recursive least squares as the learning algorithm and guarantees complete learning in two time steps. LCA-1 is experimentally verified as part of this thesis. A novel learning and control algorithm (LCA-2) is proposed and verified in simulations and on hardware. In LCA-2, each agent solves an infinite-horizon linear quadratic regulator (LQR) problem for computing control. LCA-2 uses a learning algorithm similar to line-search methods and guarantees learning convergence to the true values asymptotically. Simulations and hardware implementation show that LCA-2 is stable for a variety of systems. Load transport is demonstrated using both algorithms. Experiments running algorithm LCA-2 are able to reject disturbances and balance the load better than LCA-1. / Dissertation/Thesis / Masters Thesis Electrical Engineering 2018
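The recursive-least-squares learning step used in LCA-1 can be sketched generically. The regressors and the "true" preference vector below are synthetic stand-ins, not the thesis's robot model:

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true = np.array([2.0, -1.5])   # hypothetical preference parameters

# Recursive least squares: estimate theta from observations y = phi . theta
theta = np.zeros(2)
P = 1e3 * np.eye(2)                  # large initial covariance (weak prior)
for _ in range(200):
    phi = rng.normal(size=2)         # regressor built from state/input data
    y = phi @ theta_true             # noiseless observation for clarity
    K = P @ phi / (1.0 + phi @ P @ phi)
    theta = theta + K * (y - phi @ theta)
    P = P - np.outer(K, phi @ P)
```

With noiseless observations and persistently exciting regressors, the estimate converges to the true parameters, which mirrors the "complete learning in two time steps" property the abstract attributes to LCA-1.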
74

Linear-time invariant Positive Systems: Stabilization and the Servomechanism Problem

Roszak, Bartek 17 January 2012 (has links)
Positive systems, which carry the well-known property of confining the state, output, and/or input variables to the nonnegative orthant, are of great practical importance, as nonnegativity occurs quite frequently in numerous applications and in nature. These types of systems frequently occur in hydrology, where they model natural and artificial networks of reservoirs; in biology, where they describe the transportation, accumulation, and drainage of elements and compounds like hormones, glucose, insulin, and metals; and in stocking, industrial, and engineering systems where chemical reactions, heat exchange, and distillation processes take place. The interest of this dissertation is in two key problems: positive stabilization and the positive servomechanism problem. In particular, this thesis outlines necessary and sufficient conditions for the stabilization of positive linear time-invariant (LTI) systems using state feedback control, along with an algorithm for constructing such a stabilizing regulator. Moreover, the results on stabilization also encompass the positive separation principle and stabilization via observer design. The second, and most emphasized, problem of this dissertation considers the positive servomechanism problem for both single-input single-output (SISO) and multi-input multi-output (MIMO) stable positive LTI systems. The study focuses on tracking of nonnegative constant reference signals for unknown/known stable SISO/MIMO positive LTI systems with nonnegative unmeasurable/measurable constant disturbances, via switching tuning clamping regulators (TcR), linear quadratic clamping regulators (LTQcR), and finally MPC control. All theoretical results on the positive servomechanism problem are validated via numerous experimental results on a waterworks system.
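A small numerical sketch of the two conditions positive stabilization must satisfy. The two-compartment matrices and the feedback gain below are made-up illustrative values, not from the dissertation:

```python
import numpy as np

# Hypothetical two-reservoir compartmental system: A is Metzler
# (nonnegative off-diagonal entries), so nonnegative states stay nonnegative.
A = np.array([[-0.5,  0.2],
              [ 0.3, -0.4]])
B = np.array([[1.0],
              [0.0]])
K = np.array([[0.1, 0.05]])          # candidate state-feedback gain (assumed)

Acl = A - B @ K

# Positive stabilization requires both of the following:
# 1) the closed loop stays Metzler (positivity of the state is preserved)
is_metzler = all(Acl[i, j] >= 0 for i in range(2) for j in range(2) if i != j)
# 2) all closed-loop eigenvalues lie in the open left half-plane (stability)
max_real = max(np.linalg.eigvals(Acl).real)
```

The interplay of these two checks is what makes positive stabilization harder than ordinary pole placement: the gain must stabilize without destroying the Metzler structure.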
76

PWM/PFM Mixed Modulation Controller for Twin-Buck Converter

Fan, Bo-Wen 09 October 2012 (has links)
In this thesis, we apply the state-averaging method to derive a time-averaged linear dynamic model, which is used to design a gain-scheduled linear quadratic optimal controller. Because the standard modulation method of the twin-buck converter is PFM (Pulse-Frequency Modulation) and the twin-buck converter has a soft-switching characteristic, the voltage step-down ratio, that is, the control effort, cannot be lowered below 0.5. To expand the range of the converter's control effort, we modulate the converter with a mixed PWM/PFM scheme. With this modulation method, the controller must calculate the discharging time of the synchronous switch needed to achieve zero-voltage transition (ZVT). In the last part of this thesis, we verify the practicality of the controller and modulation method through software simulation coded in MATLAB and a hardware FPGA implementation written in Verilog.
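The state-averaging step can be illustrated on a plain ideal buck converter. The component values are assumed for illustration, and the resulting relation v_out = D·V_in holds only for this ideal averaged model, not for the twin-buck's soft-switched behavior:

```python
import numpy as np

# State-averaged ideal buck model, x = [inductor current, output voltage]
L_, C_, R_ = 100e-6, 470e-6, 10.0    # assumed component values
Vin, D = 12.0, 0.5                   # input voltage and averaged duty ratio

A = np.array([[0.0,      -1.0 / L_],
              [1.0 / C_, -1.0 / (R_ * C_)]])
B = np.array([[1.0 / L_],
              [0.0]])

# Steady state of x' = A x + B (D * Vin):  x_ss = -A^{-1} B D Vin
x_ss = -np.linalg.solve(A, B * (D * Vin)).flatten()
i_L, v_out = x_ss                    # the averaged model gives v_out = D * Vin
```

Linearizing this averaged model around an operating point is what yields the plant used for the gain-scheduled LQ design; the step-down-ratio floor of 0.5 mentioned above is exactly a constraint on the admissible D.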
77

Demonstration Of A Stabilized Hovering Platform For Undergraduate Laboratory

Camlica, Fahri Bugra 01 February 2005 (has links) (PDF)
This research work covers the design, manufacture, and testing of an unmanned aerial vehicle for the purpose of testing various control systems by undergraduate students in a laboratory environment. The aerial vehicle under consideration is powered by four rotor-driven propellers, and an aluminum-rod-based mechanical structure is used. The stabilization of the hovering vehicle about its rotational axes and navigation about the yaw axis are the accomplished goals of this study. The aerial vehicle is run in real time using the xPC module of Matlab 6.5. Linear quadratic regulator and PD controllers are utilized to stabilize the aerial vehicle about its rotation axes. To eliminate the measurement noise generated by the sensors, a second-order low-pass transfer function is designed, and its implementation in real-time experiments is discussed.
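The second-order low-pass filter used to suppress sensor noise can be sketched as a bilinear-transform discretization. The cutoff frequency, sample rate, and damping ratio here are assumed values, not those of the thesis:

```python
import math

def lowpass2(x, fc, fs, zeta=0.707):
    """2nd-order low-pass H(s) = wn^2/(s^2 + 2*zeta*wn*s + wn^2),
    discretized with the bilinear transform and applied to sequence x."""
    wn = 2.0 * math.pi * fc
    K = 2.0 * fs
    a0 = K * K + 2.0 * zeta * wn * K + wn * wn
    a1 = (2.0 * wn * wn - 2.0 * K * K) / a0
    a2 = (K * K - 2.0 * zeta * wn * K + wn * wn) / a0
    b0, b1, b2 = wn * wn / a0, 2.0 * wn * wn / a0, wn * wn / a0
    y, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for xn in x:
        yn = b0 * xn + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        y.append(yn)
        x2, x1, y2, y1 = x1, xn, y1, yn
    return y

# A constant input should pass through with unity DC gain
settled = lowpass2([1.0] * 500, fc=5.0, fs=100.0)[-1]
```

The filter's phase lag is the usual cost of this noise suppression, which is why the cutoff must be placed well above the vehicle's control bandwidth.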
78

Comparison of Modern Controls and Reinforcement Learning for Robust Control of Autonomously Backing Up Tractor-Trailers to Loading Docks

McDowell, Journey 01 November 2019 (has links)
Two controllers are assessed for generalization in the path-following task of autonomously backing up a tractor-trailer. Starting from random locations and orientations, paths are generated to loading docks with arbitrary pose using Dubins curves. The combination vehicles can be varied in wheelbase, hitch length, weight distribution, and tire cornering stiffness. The closed-form calculation of the gains for the Linear Quadratic Regulator (LQR) relies heavily on having an accurate model of the plant. However, real-world applications cannot expect to have an updated model for each new trailer. Finding alternative robust controllers when the trailer model is changed was the motivation of this research. Reinforcement learning, with neural networks as function approximators, can allow for generalized control from learned experience that is characterized by a scalar reward value. The Linear Quadratic Regulator and the Deep Deterministic Policy Gradient (DDPG) are compared for robust control when the trailer is changed. This investigation quantifies the capabilities and limitations of both controllers in simulation using a kinematic model. The controllers are evaluated for generalization by altering the kinematic model's trailer wheelbase, hitch length, and velocity from the nominal case. In order to close the gap between simulation and reality, the control methods are also assessed with sensor noise and various controller frequencies. The root mean squared and maximum errors from the path are used as metrics, along with the number of times the controllers cause the vehicle to jackknife or reach the goal. Considering the runs where the LQR did not cause the trailer to jackknife, the LQR tended to have slightly better precision. DDPG, however, controlled the trailer successfully on the paths where the LQR jackknifed.
Reinforcement learning was found to sacrifice a short-term reward, such as precision, to maximize the future expected reward, such as reaching the loading dock. The reinforcement learning agent learned a policy that imposed nonlinear constraints such that it never jackknifed, even on trailers it was not trained on.
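The kind of kinematic model both controllers act on can be sketched as follows. The wheelbase and hitch-length values are illustrative assumptions, and the model omits the off-axle hitch offset for simplicity:

```python
import math

def step(state, v, delta, dt, L1=5.7, L2=12.0):
    """One Euler step of a simplified kinematic tractor-trailer model.

    state = (x, y, psi1, psi2): tractor position/heading, trailer heading.
    v < 0 means backing up; delta is the steering angle in radians.
    L1, L2 are assumed tractor/trailer wheelbases in meters.
    """
    x, y, psi1, psi2 = state
    x += v * math.cos(psi1) * dt
    y += v * math.sin(psi1) * dt
    psi1 += v / L1 * math.tan(delta) * dt
    psi2 += v / L2 * math.sin(psi1 - psi2) * dt
    return (x, y, psi1, psi2)

# Backing straight with zero steering keeps tractor and trailer aligned
s = (0.0, 0.0, 0.0, 0.0)
for _ in range(100):
    s = step(s, v=-1.0, delta=0.0, dt=0.1)
```

The jackknife instability appears in this model when the hitch angle psi1 − psi2 grows during reversing (v < 0), which is exactly the failure mode both controllers are scored on.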
79

Path Following Using Gain Scheduled LQR Control : with applications to a labyrinth game

Frid, Emil, Nilsson, Fredrik January 2020 (has links)
This master's thesis aims to make the BRIO Labyrinth Game autonomous, with the main focus on the development of a path-following controller. A test-bench system is built using a modern edition of the classic game with the addition of a Raspberry Pi, a camera, and two servos. A mathematical model of the ball-and-plate system is derived for use in model-based controllers. A method of using path projection on a cubic-spline-interpolated path to derive the reference states is explained. After that, three path-following controllers are presented: a modified LQR, a Gain Scheduled LQR, and a Gain Scheduled LQR with obstacle avoidance. The performance of these controllers is compared on an easy and a hard labyrinth level, both with respect to the ability to follow the reference path and with respect to the success rate of controlling the ball from start to finish without falling into any hole. All three controllers achieved a success rate over 90 % on the easy level. On the hard level the Gain Scheduled LQR achieved the highest success rate, 78.7 %, while the modified LQR achieved the lowest deviation from the reference path. The Gain Scheduled LQR with obstacle avoidance performed the worst in both regards. Overall, the results are promising, and some insights gained when designing the controllers may be useful for the development of controllers in other applications as well.
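The path-projection step used to pick reference states can be sketched as a nearest-point search over a densely sampled path; sampling the spline finely enough makes this a reasonable stand-in for exact projection. The quarter-circle path below is a made-up example, not the labyrinth geometry:

```python
import math

def project_to_path(point, path):
    """Index and coordinates of the sampled path point nearest to `point`."""
    px, py = point
    d2 = [(px - x) ** 2 + (py - y) ** 2 for x, y in path]
    i = min(range(len(path)), key=d2.__getitem__)
    return i, path[i]

# Densely sampled quarter circle of radius 1 as a stand-in for the spline path
path = [(math.cos(t / 100.0), math.sin(t / 100.0)) for t in range(158)]
idx, nearest = project_to_path((0.8, 0.8), path)
```

In a path-following controller, the projected index supplies the reference position and, via the local path tangent, the reference velocity direction for the LQR error states.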
80

Comparison of LQR and LQR-MRAC for Linear Tractor-Trailer Model

Gasik, Kevin Richard 01 May 2019 (has links) (PDF)
The United States trucking industry is immense, employing over three million drivers and reaching every city in the country. Semi-trucks travel millions of miles each week and share the roads that civilians travel on; these vehicles should be safe and allow efficient travel for all. Autonomous vehicles have been discussed in controls for many decades, and fleets of autonomous vehicles are now beginning their integration into society. The ability to create an autonomous system requires domain- and system-specific knowledge. Approaches to implementing a fully autonomous vehicle have been developed using different control-system techniques such as Kalman filters, neural networks, model predictive control, and adaptive control. However, some of these control techniques require highly accurate models, immense computing power, and terabytes of storage. One way to circumvent these issues is an adaptive control scheme. Adaptive control systems allow an existing control system to self-tune its performance under unknown variables, i.e., when the environment changes. In this thesis an LQR error-state control system is derived and shown to maintain a magnitude of 15 cm of steady-state error from the center-line of the road. In addition, a proposed LQR-MRAC controller is used to test the robustness of a lane-keeping control system. The LQR-MRAC controller improved the transient-response peak error from the center-line of the road for the tractor and the trailer by 9.47 cm and 7.27 cm, respectively. The LQR-MRAC controller increased tractor steady-state error by 0.4 cm and decreased trailer steady-state error by 1 cm. The LQR-MRAC controller was able to outperform modern control techniques and can be used to improve the response of the tractor-trailer system to handle mass changes in its environment.
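The adaptive (MRAC) layer can be sketched on a scalar example using the classic MIT rule. The plant gain, reference model, and adaptation rate are toy values for illustration, not the tractor-trailer model from the thesis:

```python
# Scalar MRAC sketch (MIT rule): plant y' = -y + k*u with gain k unknown
# to the controller, reference model ym' = -ym + r, control u = theta * r.
# The ideal feedforward gain is theta* = 1/k.
dt, gamma, k = 0.01, 0.5, 2.0        # assumed step, adaptation rate, plant gain
y = ym = theta = 0.0
r = 1.0                              # constant reference
for _ in range(20000):               # 200 s of simulated time, Euler steps
    e = y - ym
    # MIT rule: move theta down the gradient of e^2/2, using the model
    # output ym as a proxy for the sensitivity derivative
    theta -= gamma * e * ym * dt
    y += (-y + k * theta * r) * dt
    ym += (-ym + r) * dt
```

The same idea, wrapped around an LQR baseline and driven by the error between the vehicle and a reference model, is what lets an LQR-MRAC scheme absorb plant changes (such as trailer mass) that a fixed-gain LQR cannot.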