11 |
Autonomous Overtaking with Learning Model Predictive Control / Autonom Omkörning med Learning Model Predictive Control. Bengtsson, Ivar, January 2020 (has links)
We review recent research into trajectory planning for autonomous overtaking to understand existing challenges. The recently developed Learning Model Predictive Control (LMPC) framework is then presented as a suitable method for iteratively improving an overtaking manoeuvre each time it is performed. We present recent extensions to the LMPC framework that make it applicable to overtaking, as well as two alternative modelling approaches intended to reduce the computational complexity of the optimization problems solved by the controller. All three approaches are built from scratch in Python3 and simulated for evaluation purposes. Optimization problems are modelled and solved using gurobipy, the Python API of Gurobi 9.0. The results show that LMPC can be successfully applied to the overtaking problem, with performance improving at each iteration. However, the first alternative modelling approach does not reduce computation times as intended; the second one does, but performs worse in other respects.
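As a rough illustration of how such an optimization problem can be posed with gurobipy, the sketch below sets up a single finite-horizon MPC step for a hypothetical double-integrator model. The matrices, horizon and bounds are assumed example values, not the vehicle model used in the thesis, and the LMPC-specific terminal safe set and terminal cost built from previous iterations are omitted.

```python
# Minimal sketch of one finite-horizon MPC step in gurobipy (hypothetical
# double-integrator model; not the thesis' vehicle model or LMPC terminal
# set/cost).
import numpy as np
import gurobipy as gp
from gurobipy import GRB

A = np.array([[1.0, 0.1], [0.0, 1.0]])       # assumed discrete-time dynamics
B = np.array([[0.005], [0.1]])
N = 20                                        # prediction horizon
x0 = np.array([0.0, 0.0])                     # current state
x_ref = np.array([10.0, 0.0])                 # target state

m = gp.Model("mpc_step")
m.Params.OutputFlag = 0
x = m.addVars(N + 1, 2, lb=-GRB.INFINITY, name="x")
u = m.addVars(N, lb=-2.0, ub=2.0, name="u")   # hard input constraints

m.addConstrs(x[0, i] == x0[i] for i in range(2))
m.addConstrs(x[k + 1, i]
             == gp.quicksum(A[i, j] * x[k, j] for j in range(2)) + B[i, 0] * u[k]
             for k in range(N) for i in range(2))

m.setObjective(gp.quicksum((x[k, i] - x_ref[i]) * (x[k, i] - x_ref[i])
                           for k in range(N + 1) for i in range(2))
               + 0.1 * gp.quicksum(u[k] * u[k] for k in range(N)),
               GRB.MINIMIZE)
m.optimize()
print("first input of the optimal sequence:", u[0].X)   # applied, then re-solved
```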
|
12 |
Autonomous learning of domain models from probability distribution clusters. Słowiński, Witold, January 2014 (has links)
Nontrivial domains can be difficult to understand, and encoding a model of such a domain is a demanding task for a human expert; this is one of the fundamental problems of knowledge acquisition. Model learning addresses this problem by allowing a predictive model of the domain's dynamics to be learnt algorithmically, without human supervision. Such models can provide insight about the domain to a human or aid in automated planning or reinforcement learning. This dissertation addresses the problem of how to learn a model of a continuous, dynamic domain from sensory observations, through the discretisation of its continuous state space. The learning process is unsupervised in that there are no predefined goals, and it assumes no prior knowledge of the environment. Its outcome is a model consisting of a set of predictive cause-and-effect rules which describe changes in related variables over brief periods of time. We present a novel method for learning such a model, centred on discretising the state space by identifying clusters of uniform density in the probability density functions of variables; these clusters correspond to meaningful features of the state space. First, we show that this method can learn models exhibiting predictive power. Secondly, we show that applying the discretisation process to two-dimensional vector variables in addition to scalar variables yields a better model than applying it to scalar variables alone, and we describe novel algorithms and data structures for discretising one- and two-dimensional spaces from observations. Finally, we demonstrate that the method can be useful for planning or decision making in domains where the state space exhibits stable regions of high probability and transitional regions of lesser probability. We provide evidence for these claims by evaluating the model learning algorithm in two dynamic, continuous domains involving simulated physics: the OpenArena computer game and a two-dimensional simulation of a bouncing ball falling onto uneven terrain.
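As a rough, hedged illustration of the underlying idea (not the dissertation's actual algorithm), the sketch below discretises a scalar variable by locating contiguous high-density regions of its empirical distribution; the threshold and data are made-up example values.

```python
# Illustrative sketch: discretise a scalar variable by finding contiguous
# high-density regions of its empirical distribution (assumed parameters).
import numpy as np

def density_clusters(samples, bins=50, rel_threshold=0.5):
    """Return (low, high) intervals where the histogram density exceeds
    rel_threshold * max density; each interval becomes one discrete state."""
    hist, edges = np.histogram(samples, bins=bins, density=True)
    mask = hist >= rel_threshold * hist.max()
    clusters, start = [], None
    for i, above in enumerate(mask):
        if above and start is None:
            start = i
        elif not above and start is not None:
            clusters.append((edges[start], edges[i]))
            start = None
    if start is not None:
        clusters.append((edges[start], edges[-1]))
    return clusters

# Example: a bouncing-ball height spends most time near the floor and the apex.
rng = np.random.default_rng(0)
samples = np.concatenate([rng.normal(0.1, 0.02, 800), rng.normal(1.0, 0.05, 200)])
print(density_clusters(samples))
```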
|
13 |
Coordinated control of hot strip tandem rolling mill. McNeilly, Gordon, January 1999 (has links)
No description available.
|
14 |
Fault-tolerant predictive control : a Gaussian process model based approach. Yang, Xiaoke, January 2015 (has links)
No description available.
|
15 |
Learning techniques in receding horizon control and cooperative control. / CUHK electronic theses & dissertations collection. January 2010 (has links)
Two topics of modern control are investigated in this dissertation, namely receding horizon control (RHC) and cooperative control of networked systems. We apply learning techniques to these two topics. Specifically, we incorporate the reinforcement learning concept into standard receding horizon control, yielding a new RHC algorithm, and relax the stability constraints required for standard RHC. For the second topic, we apply neural adaptive control to synchronization of networked nonlinear systems and propose distributed robust adaptive controllers such that all nodes synchronize to a leader node. / Receding horizon control (RHC), also called model predictive control (MPC), is a suboptimal control scheme over an infinite horizon that is determined by solving a finite horizon open-loop optimal control problem repeatedly. It has widespread applications in industry. Reinforcement learning (RL) is a computational intelligence method in which an optimal control policy is learned over time by evaluating the performance of suboptimal control policies. In this dissertation it is shown that reinforcement learning techniques can significantly improve the behavior of RHC. Specifically, RL methods are used to add a learning feature to RHC. It is shown that keeping track of the value learned at the previous iteration and using it as the new terminal cost for RHC can overcome traditional strong requirements for RHC stability, such as that the terminal cost be a control Lyapunov function, or that the horizon length be greater than some bound. We propose improved RHC algorithms, called updated terminal cost receding horizon control (UTC-RHC), first in the framework of discrete-time linear systems and then in the framework of continuous-time linear systems. For both cases, we show that uniform exponential stability of the closed-loop system can be guaranteed under very mild conditions. Moreover, unlike RHC, the UTC-RHC control gain approaches the optimal policy associated with the infinite horizon optimal control problem. To show these properties, non-standard Lyapunov functions are introduced for both the discrete-time and continuous-time cases. / Cooperative control of networked systems (or multi-agent systems) has attracted much attention during the past few years, but most existing results focus on first-order and second-order leaderless consensus problems with linear dynamics. The second part of this dissertation solves a higher-order synchronization problem for cooperative nonlinear systems with an active leader. The communication network considered is a weighted directed graph with fixed topology. Each agent is modeled by a higher-order nonlinear system with unknown nonlinear dynamics and is perturbed by unknown external disturbances. The leader agent is modeled as a higher-order non-autonomous nonlinear system; it acts as a command generator and can only give commands to a small portion of the networked group. A robust adaptive neural network controller is designed for each agent, and neural network learning algorithms are given such that all nodes ultimately synchronize to the leader node with a small residual error. Moreover, these controllers are totally distributed in the sense that each controller only requires its own information and its neighbors' information. / Zhang, Hongwei. / Adviser: Jie Huang. / Source: Dissertation Abstracts International, Volume: 72-04, Section: B, page: . / Thesis (Ph.D.)--Chinese University of Hong Kong, 2010.
/ Includes bibliographical references (leaves 99-105). / Abstract also in Chinese.
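The UTC-RHC idea described above can be sketched for the discrete-time linear-quadratic case, where reusing the previously computed value function as the terminal cost amounts to iterating the finite-horizon Riccati recursion; the system matrices and horizon below are assumed illustration values, not taken from the dissertation.

```python
# Illustrative sketch of UTC-RHC for a discrete-time LQ problem: the terminal
# cost of each finite-horizon problem is replaced by the value function
# computed at the previous iteration (assumed example matrices).
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])    # assumed double-integrator example
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
N = 3                                      # deliberately short horizon

def backward_pass(P_terminal):
    """Finite-horizon Riccati recursion with terminal cost P_terminal.
    Returns the first-step feedback gain and the resulting cost-to-go."""
    P = P_terminal
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ A - A.T @ P @ B @ K
    return K, P

P = np.zeros((2, 2))                       # initial terminal cost
x = np.array([[5.0], [0.0]])
for t in range(30):
    K, P = backward_pass(P)                # carry the learned value function forward
    x = (A - B @ K) @ x                    # apply u = -K x and move one step
print("UTC-RHC gain after repeated updates:\n", K)
```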
|
16 |
Identification and control of nonlinear processes with static nonlinearities. Chan, Kwong Ho, Chemical Sciences & Engineering, Faculty of Engineering, UNSW, January 2007 (has links)
Process control has been playing an increasingly important role in many industrial applications as an effective way to improve product quality, process cost-effectiveness and safety. Simple linear dynamic models are used extensively in process control practice, but they are limited in the types of process behavior they can approximate. It is well documented that simple nonlinear models can often provide much better approximations to process dynamics than linear models, so there is potential for significant improvement of control quality through the implementation of model-based control procedures. However, such control applications are still not widely implemented, because the mathematical process models needed for model-based control can be very difficult and expensive to obtain due to the complexity of these systems and poor understanding of the underlying physics. The main objective of this thesis is to develop new approaches to modeling and control of nonlinear processes. In this thesis, multivariable nonlinear processes are approximated using a model with a static nonlinearity and linear dynamics. In particular, the Hammerstein model structure, where the nonlinearity is on the input, is used. Cardinal spline functions are used to identify the multivariable input nonlinearity; thanks to their flexibility and versatility, highly coupled nonlinearities can also be identified. An approach that identifies both the nonlinearity and the linear dynamics in a single step has been developed, and the condition of persistent excitation has been derived. Nonlinear control design approaches for these models are then developed based on: (1) a nonlinear compensator; (2) extended internal model control (IMC); and (3) the model predictive control (MPC) framework. The concept of passivity is used to guarantee the stability of the closed-loop system in each approach. In the nonlinear compensator approach, the passivity of the process is recovered using an appropriate static nonlinearity; the non-passive linear system is passified using a feedforward system, so that the passified overall system can be stabilized by a passive linear controller together with the nonlinear compensator. In the extended IMC approach, dynamic inverses are used for both the input nonlinearity and the linear dynamics, and passivity-based stability conditions are used to obtain invertible approximations of the subsystems and guarantee the stability of the nonlinear closed-loop system. In the MPC approach, a numerical inverse is implemented. The condition under which the numerical inversion is guaranteed to converge is derived; based on this condition, the input space in which the numerical inverse can be obtained is identified. This constitutes a new constraint on the input space, in addition to the physical input constraints. The total input constraints are transformed into linear input constraints using polytopic descriptions and incorporated in the MPC design.
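A much-simplified sketch of the single-step identification idea is shown below, with a polynomial basis standing in for the cardinal splines and a made-up first-order example system; it illustrates the linear-in-parameters, overparameterised least-squares formulation, not the thesis' algorithm.

```python
# Much-simplified single-step Hammerstein identification (polynomial basis in
# place of cardinal splines; made-up first-order system, not the thesis' method).
import numpy as np

rng = np.random.default_rng(1)
u = rng.uniform(-1, 1, 500)
f = lambda v: v + 0.5 * v ** 2                     # "unknown" static input nonlinearity
y = np.zeros(500)
for t in range(1, 500):                            # first-order linear dynamics + noise
    y[t] = 0.8 * y[t - 1] + 0.5 * f(u[t - 1]) + 0.01 * rng.standard_normal()

basis = lambda v: np.column_stack([v, v ** 2, v ** 3])   # basis functions g_k(u)

# Overparameterised, linear-in-parameters regression:
#   y(t) = a*y(t-1) + sum_k c_k * g_k(u(t-1))
Phi = np.column_stack([y[:-1], basis(u[:-1])])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
print("estimated [a, c1, c2, c3]:", theta)         # expect roughly [0.8, 0.5, 0.25, 0]
```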
|
17 |
Continuous-time Model Predictive Control. Truong, Quan, trunongluongquan@yahoo.com.au, January 2007 (has links)
Model Predictive Control (MPC) refers to a class of algorithms that optimize the future behavior of the plant subject to operational constraints [46]. The merits of this class of algorithms include its ability to handle hard constraints imposed on the system and to perform on-line optimization. This thesis investigates the design and implementation of continuous-time model predictive control using Laguerre polynomials and extends the design approaches proposed in [43] to include intermittent predictive control as well as nonlinear predictive control. In intermittent predictive control, the Laguerre functions are used to describe the control trajectories between two sample points, to save computational time and make implementation feasible when a dynamic system is sampled fast. In nonlinear predictive control, the Laguerre polynomials are used to describe the trajectories of the nonlinear control signals, so that the receding horizon control principle is applied in the design with respect to the nonlinear system constraints. In addition, the thesis reviews several quadratic programming methods and compares their performance in the implementation of predictive control. The thesis also presents simulation results of predictive control of an autonomous underwater vehicle and a water tank.
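As a hedged illustration of how a control trajectory can be parameterised by continuous-time Laguerre functions, the sketch below uses one common state-space realisation of the Laguerre network; the pole p, the number of terms N and the coefficient vector eta are assumed example values (in an MPC design, eta would come from the quadratic program).

```python
# Sketch: parameterising a control trajectory with continuous-time Laguerre
# functions, u(t) ~ L(t)^T eta, using one common state-space realisation of
# the Laguerre network (pole p, order N and eta are assumed example values).
import numpy as np
from scipy.linalg import expm

p, N = 1.0, 4
A_p = -2.0 * p * np.tril(np.ones((N, N)), -1) - p * np.eye(N)
L0 = np.sqrt(2.0 * p) * np.ones((N, 1))

def laguerre(t):
    """Vector of the first N continuous-time Laguerre functions at time t."""
    return expm(A_p * t) @ L0

eta = np.array([[0.5], [-0.2], [0.1], [0.0]])   # coefficients a QP would normally supply
for t in np.linspace(0.0, 5.0, 6):
    print(t, (laguerre(t).T @ eta).item())      # sampled control trajectory u(t)
```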
|
18 |
Robust Repetitive Model Predictive Control for Systems with Uncertain Period-Time. Gupta, Manish, 12 April 2004 (has links)
Repetitive Model Predictive Control (RMPC) incorporates the idea of Repetitive Control (RC) into Model Predictive Control (MPC) to take full advantage of the constraint-handling and multivariable control features of MPC in periodic processes. RMPC achieves perfect asymptotic tracking/rejection in periodic processes, provided that the period length used in the control formulation matches the actual period of the reference/disturbance exactly. Even a small mismatch between the actual period of the process and the controller period can deteriorate RMPC performance significantly. The period mismatch arises either from an inaccurate estimate of the actual disturbance frequency due to resolution limits, or from forcing the controller period to be an integer multiple of the sampling time. An extension of RMPC, called Robust Repetitive Model Predictive Control (R-RMPC), is proposed for cases where the period length cannot be predetermined accurately or where the period is not an integer multiple of the sampling time. This robust RMPC borrows the idea of using weighted, multiple memory loops from RC for robustness enhancement. The modified RMPC is more robust in the sense that small changes in period length do not diminish the tracking/rejection properties by much. Simulation results show that R-RMPC achieves significant improvement over the standard RMPC in the case of a slight period mismatch. The effectiveness of this robust RMPC is demonstrated by applying it to a mechanical motion tracking machine whose function is to follow a constant trajectory while rejecting periodic disturbances of uncertain period.
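A minimal sketch of the weighted multiple-memory-loop idea is given below; the period, the slightly mismatched disturbance and the use of three loops are illustration values only, with weights taken from the common maximally flat construction 1 - (1 - q)^3, where q denotes a one-period delay.

```python
# Sketch of the weighted multiple-memory-loop idea from repetitive control:
# the periodic signal is estimated from several period-delayed copies of the
# error.  Weights [3, -3, 1] come from expanding 1 - (1 - q)^3; the period and
# disturbance are illustration values.
import numpy as np

N = 100                                   # nominal period in samples
weights = [3.0, -3.0, 1.0]                # w_1..w_3, summing to 1

def repetitive_estimate(error, t):
    """Weighted combination of the error one, two and three periods back."""
    return sum(w * error[t - (i + 1) * N]
               for i, w in enumerate(weights) if t - (i + 1) * N >= 0)

t_axis = np.arange(1200)
err = np.sin(2 * np.pi * t_axis / 101.0)  # actual period 101 samples, not 100
est = np.array([repetitive_estimate(err, t) for t in t_axis])
print("rms mismatch after 3 periods:",
      np.sqrt(np.mean((est[3 * N:] - err[3 * N:]) ** 2)))
```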
|
19 |
Integrated tracking and guidance. Best, Robert Andrew, January 1996 (has links)
No description available.
|
20 |
Process control applications of long-range prediction. Lambert, E. P., January 1987 (has links)
The recent Generalised Predictive Control algorithm (Clarke et al., 1984, 1987) is a self-tuning/adaptive control algorithm based upon long-range prediction, and is thus claimed to be particularly suitable for process control applications. The complicated nature of GPC prevents the application of standard analytical techniques; therefore an alternative technique is developed in which an equivalent closed-loop expression is repeatedly calculated for various control scenarios. The properties of GPC are investigated and, in particular, it is shown that 'default' values for GPC's design parameters give a mean-level type of control law that can reasonably be expected to provide robust control for a wide variety of processes. Two successful industrial applications of GPC are then reported. The first series of trials involves SISO control of soap moisture for a full-scale drying process. After a brief period of PRBS-assisted self-tuning, default GPC control performance is shown to be significantly better than the existing manual control, despite the presence of a large time-delay, poor measurements and severe production restrictions. The second application concerns MIMO inner-loop control of a spray drying tower using two types of GPC controller: full multivariable MGPC and multi-loop DGPC. Again, after only a brief period of PRBS-assisted self-tuning, both provide dramatically superior control compared to the existing multi-loop gain-scheduled PID control scheme. In particular, the use of MGPC successfully avoids any requirement for a priori knowledge of the process time-delay structure or input-output pairing. The decoupling performance of MGPC is improved by scaling, and that of DGPC by the use of feed-forward. The practical effectiveness of GPC's design parameters (e.g. P, T and λ) is also demonstrated. On the estimation side of adaptive control, the current state-of-the-art algorithms are reviewed and shown to suffer from problems such as 'blow-up', parameter drift and sensitivity to unmeasurable load disturbances. To overcome these problems, two novel estimation algorithms (CLS and DLS) are developed that extend the RLS cost function to include weighting of estimated parameters. Exploiting the 'fault detection' properties of CLS is proposed as a more realistic estimation philosophy for adaptive control than the continuous retention of adaptivity.
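The parameter-weighting idea can be illustrated, in much-simplified batch form, by adding a term that pulls the estimate toward prior parameter values; the following is a generic regularised least-squares sketch under assumed data, a stand-in for the idea rather than the thesis' CLS/DLS algorithms.

```python
# Much-simplified, batch illustration of weighting estimated parameters in the
# least-squares cost (not the thesis' CLS/DLS algorithms):
#   J(theta) = sum (y_k - phi_k' theta)^2 + mu * ||theta - theta_prior||^2
import numpy as np

def weighted_ls(Phi, y, theta_prior, mu):
    """Closed-form minimiser of the regularised cost above; the mu-term pulls
    the estimate toward theta_prior when the data are poorly exciting."""
    n = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + mu * np.eye(n),
                           Phi.T @ y + mu * theta_prior)

# Assumed example: one-parameter process with weak excitation.
rng = np.random.default_rng(2)
u = np.ones(200) + 0.001 * rng.standard_normal(200)
y = 0.9 * u + 0.1 * rng.standard_normal(200)
print(weighted_ls(u.reshape(-1, 1), y, theta_prior=np.array([0.9]), mu=10.0))
```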
|