1 |
Stochastic inventory control in dynamic environments / Cao, Jie. January 2005 (has links)
Thesis (Ph.D.)--University of Florida, 2005. / Title from title page of source document. Document formatted into pages; contains 158 pages. Includes vita. Includes bibliographical references.
|
2 |
Stochastic joint replenishment problems : periodic review policies / Alrasheedi, Adel Fahad January 2015 (has links)
Operations managers of manufacturing systems, distribution systems, and supply chains address lot sizing and scheduling problems as part of their duties. These problems concern decisions about the size of orders and their schedule. In general, products share or compete for common resources and thus require coordination of their replenishment decisions, whether or not replenishment involves manufacturing operations. This research is concerned with joint replenishment problems (JRPs), which are part of multi-item lot sizing and scheduling problems in manufacturing and distribution systems in single-echelon/stage settings. The principal purpose of this research is to develop three new periodic review policies for the stochastic joint replenishment problem. It also highlights the lack of research on joint replenishment problems with different demand classes (DSJRP). A periodic review policy is therefore developed for this problem, in which the inventory system faces two demand classes: deterministic demand and stochastic demand. Heuristic algorithms have been developed to obtain (near-)optimal parameters for the three policies, and a further heuristic algorithm has been developed for the DSJRP. Numerical tests against literature benchmarks are presented.
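The flavour of a periodic-review joint replenishment policy can be sketched in a few lines: at each review epoch, every depleted item is raised back to its order-up-to level, and a joint (major) ordering cost is shared across the items replenished together. This is a minimal illustrative simulation, not one of the policies developed in the thesis; the function name, all parameters, and the binomial approximation to Poisson demand are assumptions.

```python
import random

def simulate_periodic_review(S, review_period, horizon, demand_rate,
                             major_cost, minor_cost, holding_cost, seed=0):
    """Simulate a periodic-review order-up-to policy for several items.

    S: list of order-up-to levels, one per item (hypothetical parameters).
    A joint (major) ordering cost is charged once per review in which
    any item is replenished; each replenished item adds a minor cost.
    Demand per period is a Binomial(20, rate/20) approximation to Poisson.
    """
    rng = random.Random(seed)
    n = len(S)
    inventory = list(S)          # start each item at its order-up-to level
    total_cost = 0.0
    for t in range(horizon):
        # Demand arrives for each item
        for i in range(n):
            d = sum(1 for _ in range(20) if rng.random() < demand_rate / 20)
            inventory[i] -= d
        # Holding cost charged on positive stock only
        total_cost += holding_cost * sum(max(x, 0) for x in inventory)
        # Review epoch: raise every depleted item back to its level S[i]
        if t % review_period == 0:
            ordered = [i for i in range(n) if inventory[i] < S[i]]
            if ordered:
                total_cost += major_cost + minor_cost * len(ordered)
                for i in ordered:
                    inventory[i] = S[i]
    return total_cost / horizon  # average cost per period
```

Searching over `S` and `review_period` for the lowest average cost is the kind of optimisation the thesis's heuristic algorithms perform, over richer policy classes.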
|
3 |
Performance improvement for stochastic systems using state estimation / Zhou, Yuyang January 2018 (has links)
Recent developments in the practical control field have heightened the need for performance enhancement. The designed controller should not only ensure that the variables follow their set-point values, but should also address system performance measures such as quality and efficiency. Since unavoidable noise is widespread in industrial processes, the randomness of the tracking errors can be regarded as a critical performance measure to improve. In addition, because some controllers for industrial processes cannot be changed once their parameters are designed, it is crucial to design a control algorithm that minimises the randomness of the tracking error without changing the existing closed-loop control. To achieve these objectives, a class of novel algorithms is proposed in this thesis for different types of systems with unmeasurable states. Without changing the existing closed-loop proportional-integral (PI) controller, an additional compensative controller is introduced to reduce the randomness of the tracking error. The PI controller thus always guarantees the basic tracking property, while the designed compensative signal can be removed at any time without affecting normal operation. Instead of using only the output information, as the PI controller does, the compensative controller is designed to minimise the randomness of the tracking error using estimated state information. Since most system states are unmeasurable, appropriate filters are employed to estimate them. Based on stochastic system control theory, the criteria used to characterise system randomness apply to different classes of systems; a brief review of the basic concepts of stochastic system control is therefore included in this thesis.
More specifically, these comprise overshoot minimisation for linear deterministic systems, minimum variance control for linear Gaussian stochastic systems, and minimum entropy control for non-linear and non-Gaussian stochastic systems. Furthermore, the stability of each system is analysed in the mean-square sense. Simulation results are given to illustrate the effectiveness of the presented control methods. Finally, the work of this thesis is summarised and future work addressing the limitations of the proposed algorithms is listed.
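The two randomness criteria mentioned above can be made concrete on a recorded tracking-error sequence: variance is the natural index for Gaussian errors, while a (histogram-based) entropy estimate also captures non-Gaussian spread. This is a generic illustrative sketch, not the thesis's estimators; the function name and the bin count are assumptions.

```python
import math

def tracking_error_indices(errors, bins=10):
    """Two randomness measures for a tracking-error sequence:
    sample variance (the minimum-variance criterion) and a simple
    histogram-based entropy estimate (the minimum-entropy criterion,
    meaningful even when errors are non-Gaussian)."""
    n = len(errors)
    mean = sum(errors) / n
    variance = sum((e - mean) ** 2 for e in errors) / n
    # Histogram over the observed range; constant sequences get width 1
    lo, hi = min(errors), max(errors)
    width = (hi - lo) / bins or 1.0
    counts = [0] * bins
    for e in errors:
        idx = min(int((e - lo) / width), bins - 1)
        counts[idx] += 1
    entropy = -sum((c / n) * math.log(c / n) for c in counts if c > 0)
    return variance, entropy
```

A compensative controller of the kind described would aim to drive both indices down over time without disturbing the PI loop's set-point tracking.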
|
4 |
Lyapunov-based Control of Nonlinear Process Systems: Handling Input Constraints and Stochastic Uncertainty / Mahmood, Maaz January 2020 (has links)
This thesis develops Lyapunov-based control techniques for nonlinear process systems subject to input constraints and stochastic uncertainty. The problems considered include those which focus on the null-controllable region (NCR) for unstable systems. The NCR is the set of states in the state space from which controllability to the desired equilibrium point is possible. For unstable systems, the presence of input constraints bounds the NCR and thereby limits the ability of any controller to steer the system at will. Common approaches to controlling such systems utilize Control Lyapunov Functions (CLFs). Such functions can be used both for designing controllers and for performing closed-loop stability analysis. Existing CLF-based controllers result in closed-loop stability regions that are subsets of the NCR and do not guarantee closed-loop stability from the entire NCR. In an effort to mitigate this shortcoming, we introduce a special type of CLF known as a Constrained Control Lyapunov Function (CCLF), which accounts for the presence of input constraints in its definition. CCLFs result in closed-loop stability regions that correspond to the NCR. We demonstrate how CCLFs can be constructed using a function defined by the NCR boundary trajectories for varying values of the available control capacity. We first consider linear systems and utilize the available explicit characterization of the NCR to construct CCLFs. We then develop a Model Predictive Control (MPC) design which utilizes this CCLF to achieve stability from the entire NCR for linear anti-stable systems. We then consider nonlinear systems, for which explicit characterizations of the NCR boundary are not available. To this end, the problem of boundary construction is considered and a computationally tractable algorithm is developed that constructs the boundary trajectories.
This algorithm utilizes properties of the boundary pertaining to control equilibrium points to initialize the controllability minimum principle. We then turn to the problem of closed-loop stabilization from the entire NCR for nonlinear systems. Following a development similar to the CCLF construction for linear systems, we establish the validity of using the NCR as a CCLF for nonlinear systems. This development involves relaxing the conditions which define a classical CLF, and results in CCLF-based control achieving stability to an equilibrium manifold. To achieve stabilization from the entire NCR, the CCLF-based control design is coupled with a classical CLF-based controller in a hybrid control framework.
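For orientation, a classical (unconstrained) CLF-based controller of the kind the thesis generalises can be written down explicitly for a scalar system using Sontag's universal formula. This sketch uses the quadratic CLF V(x) = x²/2 and ignores input constraints entirely, which is precisely the limitation the thesis's CCLF construction addresses; the example system is an assumption.

```python
import math

def sontag_control(x, f, g):
    """Sontag's universal formula for a scalar system  x' = f(x) + g(x)*u
    with the classical CLF V(x) = x^2 / 2, so that LfV = x*f(x) and
    LgV = x*g(x). Guarantees dV/dt < 0 away from the origin, but places
    no bound on u (unconstrained, unlike a CCLF-based design)."""
    LfV = x * f(x)
    LgV = x * g(x)
    if LgV == 0.0:
        return 0.0
    return -(LfV + math.sqrt(LfV ** 2 + LgV ** 4)) / LgV

# Example: open-loop unstable linear system x' = x + u
f = lambda x: x
g = lambda x: 1.0
u = sontag_control(1.0, f, g)   # stabilising input at x = 1
```

Note that for an unstable system the formula can demand arbitrarily large `u` far from the origin; bounding the input shrinks the recoverable set, which is why the stability region of any constrained design is at most the NCR.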
In the final part of this thesis, we consider nonlinear systems subject to stochastic uncertainty. Here we design a Lyapunov-based model predictive controller (LMPC) which provides an explicitly characterized region from which stability can be obtained probabilistically. The design exploits the constraint-handling ability of model predictive controllers in order to inherit the stabilization-in-probability characterization of a Lyapunov-based feedback controller. All the proposed control designs, along with the NCR boundary computation, are illustrated using simulation results. / Thesis / Doctor of Philosophy (PhD)
|
5 |
Robust & stochastic model predictive control / Cheng, Qifeng January 2012 (has links)
In this thesis, two different model predictive control (MPC) strategies are investigated for linear systems with uncertainty in the presence of constraints: robust MPC and stochastic MPC. First, a Youla parameter is integrated into an efficient robust MPC algorithm. It is demonstrated that, even in the constrained case, the use of the Youla parameter can desensitize the cost to the effect of uncertainty while not affecting nominal performance, and hence it strengthens the robustness of the MPC strategy. Since the controller u = Kx + c offers many advantages and is used throughout the thesis, the work provides two solutions for the case when the unconstrained nominal LQ-optimal feedback K cannot stabilise the whole class of system models. The work develops two stochastic tube approaches to account for probabilistic constraints. By using a semi-closed-loop paradigm, the nominal and the error dynamics are analyzed separately, which makes it possible to compute the tube scalings offline. First, ellipsoidal tubes are considered. The evolution of the tube scalings is simplified to be affine, and using a Markov chain model, the probabilistic tube scalings can be calculated to tighten the constraints on the nominal dynamics. The online algorithm can be formulated as a quadratic programming (QP) problem, and the MPC strategy is closed-loop stable. Following that, a direct way to compute the tube scalings is studied, which makes explicit use of information about the distribution of the uncertainty. The tubes do not take a particular shape but are defined implicitly by tightened constraints. This stochastic MPC strategy leads to non-conservative performance in the sense that the probability of constraint violation can be as large as is allowed. It also ensures recursive feasibility and closed-loop stability, and is extended to the output feedback case.
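The offline tube-scaling idea can be illustrated in one dimension: the error dynamics e⁺ = φe + w with |w| ≤ w_max give an affine recursion for the tube size β, and the nominal constraint is tightened by β at each prediction step. This scalar sketch only conveys the mechanism; the thesis works with ellipsoidal sets and probabilistic (Markov chain) scalings, and the function names are assumptions.

```python
def tube_scalings(phi, w_max, horizon):
    """Offline tube scalings for scalar error dynamics e+ = phi*e + w,
    |w| <= w_max, starting from zero error. The scaling evolves
    affinely:  beta+ = |phi| * beta + w_max  (worst-case, not
    probabilistic as in the stochastic tube approaches)."""
    beta, out = 0.0, []
    for _ in range(horizon):
        out.append(beta)
        beta = abs(phi) * beta + w_max
    return out

def tightened_bounds(x_max, betas):
    """Constraints imposed on the nominal trajectory: since the true
    state satisfies |x_k| <= |z_k| + beta_k, require |z_k| <= x_max - beta_k."""
    return [x_max - b for b in betas]
```

Because the scalings depend only on the error dynamics, they are computed once offline; the online problem then optimises the nominal trajectory subject to the tightened bounds, which is what keeps it a QP.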
|
6 |
Stochastic optimization models for service and manufacturing industry / Denton, Brian T. January 1900 (has links)
Thesis (Ph.D.)--McMaster University, 2001 / Includes bibliographical references (leaves 144-156). Also available via World Wide Web.
|
7 |
Optimal exposure strategies in insurance / Martínez Sosa, José January 2018 (has links)
Two optimisation problems are considered, in which market exposure is indirectly controlled. The first models the capital of a company and an independent portfolio of new business, each represented by a Cramér-Lundberg process. The company can choose the proportion of new business it takes on and can alter this proportion over time. Here the objective is to find a strategy that maximises the survival probability. We use a point-process framework to deal with the impact of an adapted strategy on the intensity of the new business. We prove that, for Cramér-Lundberg processes with exponentially distributed claims, it is optimal to choose a threshold-type strategy, where the company switches between owning all new business or none depending on the capital level. For this type of process, which changes both drift and jump measure when crossing the constant threshold, we solve the one- and two-sided exit problems. This optimisation problem is also solved when the capital of the company and the new business are modelled by spectrally positive Lévy processes of bounded variation. Here the one-sided exit problem is solved and we prove optimality of the same type of threshold strategy for any jump distribution. The second problem is a stochastic variation of Taylor's work on underwriting in a competitive market. Taylor maximised discounted future cash flows over a finite time horizon in a discrete-time setting, where the change of exposure from one period to the next has a multiplicative form involving the company's premium and the market average premium. The control is the company's premium strategy over the finite time horizon. Taylor's work opened a rich line of research, some of which we discuss. In contrast to Taylor's model, we consider the market average premium to be a Markov chain rather than a deterministic vector. This allows us to model uncertainty in future market conditions.
We also consider an infinite rather than finite time horizon. This removes the time dependency in Taylor's optimal strategies, which produced unrealistic results. Our main result is a formula for calculating explicitly the value function of a specific class of pricing strategies. We further explore concrete examples numerically. We find a mix of optimal strategies: in some examples the company should follow the market, while in others it should go against it.
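With the market average premium modelled as a finite Markov chain and an infinite horizon, the discounted value of a fixed pricing strategy solves a standard fixed-point equation, v(s) = r(s) + δ Σ P(s,s′) v(s′). The thesis derives this value explicitly for a specific strategy class; the sketch below instead iterates numerically, with all inputs (transition matrix, per-state cash flows, discount factor) being illustrative assumptions.

```python
def policy_value(P, reward, discount, tol=1e-10):
    """Discounted infinite-horizon value of a fixed pricing strategy when
    the market average premium follows a finite Markov chain.

    P[s][t]   : transition probability from market state s to t
    reward[s] : expected one-period cash flow of the strategy in state s
    Iterates the contraction v <- r + discount * P v to convergence."""
    n = len(P)
    v = [0.0] * n
    while True:
        v_new = [reward[s] + discount * sum(P[s][t] * v[t] for t in range(n))
                 for s in range(n)]
        if max(abs(a - b) for a, b in zip(v, v_new)) < tol:
            return v_new
        v = v_new
```

Comparing `policy_value` across candidate strategies (follow the market versus go against it) in each starting state is the kind of numerical exploration the abstract describes.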
|
8 |
Strategies in robust and stochastic model predictive control / Munoz Carpintero, Diego Alejandro January 2014 (has links)
The presence of uncertainty in model predictive control (MPC) has been accounted for using two types of approaches: robust MPC (RMPC) and stochastic MPC (SMPC). Ideal RMPC and SMPC formulations consider closed-loop optimal control problems whose exact solution, via dynamic programming, is intractable for most systems. Much effort has therefore been devoted to finding good compromises between the degree of optimality and computational tractability. This thesis expands on this effort and presents robust and stochastic MPC strategies with reduced online computational requirements, where the conservativeness incurred is made as small as conveniently possible. Two RMPC strategies are proposed for linear systems under additive uncertainty. They are based on a recently proposed approach which uses a triangular prediction structure and a non-linear control policy. One strategy transfers part of the computation of the control policy to an offline stage. The other modifies the prediction structure so that it has a striped form and the disturbance compensation extends over an infinite horizon. An RMPC strategy for linear systems with additive and multiplicative uncertainty is also presented. It considers polytopic dynamics that are designed to maximize the volume of an invariant ellipsoid, and are used in a dual-mode prediction scheme where constraint satisfaction is ensured by an approach based on a variation of Farkas' lemma. Finally, two SMPC strategies for linear systems with additive uncertainty are presented, which use an affine-in-the-disturbances control policy with a striped structure. One strategy considers an offline sequential design of the gains of the control policy, while in the other these gains are variables in the online optimization. Control-theoretic properties, such as recursive feasibility and stability, are studied for all the proposed strategies.
Numerical comparisons show that the proposed algorithms can provide a convenient compromise in terms of computational demands and control authority.
|
9 |
Multiplicative robust and stochastic MPC with application to wind turbine control / Evans, Martin A. January 2014 (has links)
A robust model predictive control algorithm is presented that explicitly handles multiplicative, or parametric, uncertainty in linear discrete models over a finite horizon. The uncertainty in the predicted future states and inputs is bounded by polytopes. The computational cost of running the controller is reduced by calculating matrices offline that provide a means to construct outer approximations to robust constraints to be applied online. The robust algorithm is extended to problems of uncertain models with an allowed probability of violation of constraints. The probabilistic degrees of satisfaction are approximated by one-step ahead sampling, with a greedy solution to the resulting mixed integer problem. An algorithm is given to enlarge a robustly invariant terminal set to exploit the probabilistic constraints. Exponential basis functions are used to create a Robust MPC algorithm for which the predictions are defined over the infinite horizon. The control degrees of freedom are weights that define the bounds on the state and input uncertainty when multiplied by the basis functions. The controller handles multiplicative and additive uncertainty. Robust MPC is applied to the problem of wind turbine control. Rotor speed and tower oscillations are controlled by a low sample rate robust predictive controller. The prediction model has multiplicative and additive uncertainty due to the uncertainty in short-term future wind speeds and in model linearisation. Robust MPC is compared to nominal MPC by means of a high-fidelity numerical simulation of a wind turbine under the two controllers in a wide range of simulated wind conditions.
|
10 |
Optimal control of hybrid electric vehicles for real-world driving patterns / Vagg, Christopher January 2015 (has links)
Optimal control of energy flows in a Hybrid Electric Vehicle (HEV) is crucial to maximising the benefits of hybridisation. The problem is complex because the optimal solution depends on future power demands, which are often unknown. Stochastic Dynamic Programming (SDP) is among the most advanced control optimisation algorithms proposed, and incorporates a stochastic representation of the future. The potential of a fully developed SDP controller has not yet been demonstrated on a real vehicle; this work presents what is believed to be the most concerted and complete attempt to do so. In characterising the typical driving patterns of the target vehicles, this work included the development and trial of an eco-driving driver assistance system, which aims to reduce fuel consumption by encouraging reduced rates of acceleration and efficient use of the gears via visual and audible feedback. Field trials were undertaken using 15 light commercial vehicles over four weeks, covering a total of 39,300 km. Average fuel savings of 7.6%, and up to 12%, were demonstrated. Data from the trials were used to assess the degree to which various legislative test cycles represent the vehicles' real-world use, and the LA92 cycle was found to be the closest statistical match. Various practical considerations in SDP controller development are addressed, such as the choice of discount factor and how the charge-sustaining characteristics of the policy can be examined and adjusted. These contributions are collated into a method for robust implementation of the SDP algorithm. Most reported HEV controllers neglect the significant complications resulting from extensive use of the electrical powertrain at high power, such as increased heat generation and battery stress. In this work a novel cost function incorporates the square of the battery C-rate as an indicator of electric powertrain stress, with the aim of mitigating real-world concerns such as elevated temperatures and battery health.
Controllers were tested in simulation and then implemented on a test vehicle; the challenges encountered in doing so are discussed. Testing was performed on a chassis dynamometer using the LA92 test cycle and the novel cost function was found to enable the SDP algorithm to reduce electrical powertrain stress by 13% without sacrificing any fuel savings, which is likely to be beneficial to battery health.
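A stage cost of the general shape described, fuel use plus a penalty on the square of the battery C-rate, can be written compactly. This is a hedged sketch of the idea only: the weights, the function name, and the additive combination are illustrative assumptions, not the thesis's actual cost function.

```python
def stage_cost(fuel_rate, battery_power_kw, capacity_kwh,
               fuel_weight=1.0, stress_weight=0.1):
    """Illustrative SDP stage cost: fuel consumption plus a battery-stress
    term proportional to the square of the C-rate (power / capacity).
    Squaring makes the penalty symmetric in charge and discharge and
    penalises high-power use disproportionately."""
    c_rate = battery_power_kw / capacity_kwh
    return fuel_weight * fuel_rate + stress_weight * c_rate ** 2
```

In an SDP framework this cost is summed (with discounting) over the stochastic drive-cycle model, so raising `stress_weight` trades fuel savings against reduced electrical powertrain stress, mirroring the 13% stress reduction reported.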
|