11 |
Optimization of reservoir waterflooding. Grema, Alhaji Shehu. January 2014 (has links)
Waterflooding is a common oil recovery technique in which water is pumped into the reservoir to increase productivity. Reservoir states change with time; as such, different injection and production settings are required to steer the process to optimal operation, which is in fact a dynamic optimization problem. It could be solved through optimal control techniques, which traditionally provide only an open-loop solution. However, such a solution is not appropriate for reservoir production because of the numerous uncertain properties involved. Models updated through the current industrial practice of ‘history matching’ may fail to predict reality correctly, and therefore solutions based on history-matched models may be suboptimal or not optimal at all. Due to its ability to counteract the effects of uncertainties, direct feedback control has recently been proposed for optimal waterflooding operations. In this work, two feedback approaches were developed for waterflooding process optimization. The first approach is based on the principle of receding horizon control (RHC), while the second is a new dynamic optimization method developed from the technique of self-optimizing control (SOC). For the SOC methodology, appropriate controlled variables (CVs), formed as combinations of measurement histories and manipulated variables, are first derived through regression on simulation data obtained from a nominal model. The optimal feedback control law is then represented as a linear function of measurement histories using the CVs obtained. Based on simulation studies, the RHC approach was found to be very sensitive to uncertainties when the nominal model differed significantly from the conceived real reservoir. The SOC methodology, on the other hand, was shown to achieve an operational profit only 2% worse than the true optimal control, but 30% better than the open-loop optimal control under the same uncertainties.
The simplicity of the developed SOC approach, coupled with its robustness to uncertainties, demonstrates its potential for real industrial applications.
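As a rough illustration of the regression step in an SOC-style methodology, the sketch below fits a linear feedback law to synthetic "nominal-model" data by ordinary least squares. All data, dimensions and coefficients here are invented for illustration and are not taken from the thesis.

```python
import random

random.seed(0)

# Synthetic stand-in for nominal-model simulation data: each row is a
# measurement history, each target the corresponding optimal input.
n_samples, n_meas = 200, 3
true_w = [0.5, -1.2, 2.0]  # hypothetical "true" linear policy

X = [[random.gauss(0, 1) for _ in range(n_meas)] for _ in range(n_samples)]
y = [sum(w * v for w, v in zip(true_w, row)) + random.gauss(0, 0.01) for row in X]

def lstsq(X, y):
    """Least-squares fit via the normal equations A w = b, A = X^T X, b = X^T y."""
    n = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
    b = [sum(r[i] * t for r, t in zip(X, y)) for i in range(n)]
    for i in range(n):                       # forward elimination (A is SPD)
        for j in range(i + 1, n):
            f = A[j][i] / A[i][i]
            for k in range(i, n):
                A[j][k] -= f * A[i][k]
            b[j] -= f * b[i]
    w = [0.0] * n
    for i in range(n - 1, -1, -1):           # back substitution
        w[i] = (b[i] - sum(A[i][k] * w[k] for k in range(i + 1, n))) / A[i][i]
    return w

w = lstsq(X, y)
print([round(v, 2) for v in w])  # recovers approximately [0.5, -1.2, 2.0]
```

The fitted coefficients then define the feedback law directly: the control is a linear function of the measurement history, with no online optimization required.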
|
12 |
Dynamically Hedging Oil and Currency Futures Using Receding Horizontal Control and Stochastic Programming. Cottrell, Paul Edward. 01 January 2015 (has links)
There is a lack of research in the area of hedging futures contracts, especially in illiquid or very volatile market conditions. It is important to understand the volatility of the oil and currency markets because reduced fluctuations in these markets could lead to better hedging performance. This study compared different hedging methods by using a hedging error metric, supplementing the Receding Horizontal Control and Stochastic Programming (RHCSP) method by utilizing the London Interbank Offered Rate with the Levy process. The RHCSP hedging method was investigated to determine whether it achieved improved hedging error compared to the Black-Scholes, Leland, and Whalley and Wilmott methods when applied to simulated, oil, and currency futures markets. A modified RHCSP method was also investigated to determine whether it could significantly reduce hedging error under extreme market illiquidity conditions when applied to the same markets. This quantitative study used chaos theory and emergence for its theoretical foundation. An experimental research method was utilized, with a sample size of 506 hedging errors pertaining to historical and simulation data. The historical data ran from January 1, 2005 through December 31, 2012. The modified RHCSP method was found to significantly reduce hedging error for oil and currency market futures, as shown by a 2-way ANOVA with a t test and post hoc Tukey test. This study promotes positive social change by identifying better risk controls for investment portfolios and illustrating how to benefit from high volatility in markets. Economists, professional investment managers, and independent investors could benefit from the findings of this study.
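A hedging error metric of the kind compared in this study can be illustrated with a toy delta-hedging experiment: plain Black-Scholes delta hedging of a short call under geometric Brownian motion, measuring the mean absolute gap between the hedge portfolio and the option payoff at expiry. This is not the RHCSP method itself, and all market parameters are invented for illustration.

```python
import math
import random

random.seed(7)
N = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF

def bs_call(S, K, T, r, sig):
    """Black-Scholes call price and delta."""
    d1 = (math.log(S / K) + (r + 0.5 * sig * sig) * T) / (sig * math.sqrt(T))
    d2 = d1 - sig * math.sqrt(T)
    return S * N(d1) - K * math.exp(-r * T) * N(d2), N(d1)

def mean_abs_hedge_error(n_steps, n_paths=2000, S0=100.0, K=100.0,
                         T=0.25, r=0.02, sig=0.3):
    """Sell a call, delta-hedge along each simulated path, average |P&L| at expiry."""
    dt = T / n_steps
    total = 0.0
    for _ in range(n_paths):
        S = S0
        price, delta = bs_call(S, K, T, r, sig)
        cash = price - delta * S                 # premium received, initial hedge bought
        for i in range(1, n_steps):
            S *= math.exp((r - 0.5 * sig * sig) * dt
                          + sig * math.sqrt(dt) * random.gauss(0, 1))
            _, nd = bs_call(S, K, T - i * dt, r, sig)
            cash = cash * math.exp(r * dt) - (nd - delta) * S   # rebalance
            delta = nd
        S *= math.exp((r - 0.5 * sig * sig) * dt
                      + sig * math.sqrt(dt) * random.gauss(0, 1))
        total += abs(cash * math.exp(r * dt) + delta * S - max(S - K, 0.0))
    return total / n_paths

coarse, fine = mean_abs_hedge_error(5), mean_abs_hedge_error(50)
print(round(coarse, 3), round(fine, 3))  # finer rebalancing gives a smaller error
```

Comparing hedging methods then amounts to comparing such error statistics across strategies and rebalancing policies, which is the role the hedging error metric plays in the study.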
|
14 |
Modeling Air Combat with Influence Diagrams. Bergdahl, Christopher. January 2013 (has links)
Air combat is a complex situation; training for it and analyzing possible tactics are time-consuming and expensive. In order to circumvent these problems, mathematical models of air combat can be used. This thesis presents air combat as a one-on-one influence diagram game, where the influence diagram allows the dynamics of the aircraft, the preferences of the pilots, and the uncertainty of decision making to be taken into account in a structured and transparent way. To obtain the players’ game-optimal control sequences with respect to their preferences, the influence diagram has to be solved. This is done by truncating the diagram with a moving horizon technique and determining and implementing the optimal controls for a dynamic game which lasts only a few time steps. The result is a working air combat model, in which a player estimates the probability that it resides in any of four possible states. The pilot’s preferences are modeled by utility functions, one for each possible state. In each time step, the players maximize the cumulative sum of the utilities that each possible action gives for each state, weighted with the corresponding probabilities. The model is demonstrated and evaluated in a few interesting aspects. The presented model offers a way of analyzing air combat tactics and maneuvering, as well as a way of making autonomous decisions in, for example, air combat simulators.
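One decision step of the probability-weighted utility maximization described above can be sketched as follows. The state names, belief values and utility numbers are illustrative placeholders, not values from the thesis.

```python
# Four possible states the player may reside in, with a belief (probability) over them.
states = ["advantage", "disadvantage", "mutual_advantage", "neutral"]
belief = {"advantage": 0.35, "disadvantage": 0.15,
          "mutual_advantage": 0.30, "neutral": 0.20}

# Utility of each state after taking a given maneuver (hypothetical numbers).
utilities = {
    "pure_pursuit": {"advantage": 1.0, "disadvantage": -1.0,
                     "mutual_advantage": 0.3, "neutral": 0.0},
    "evade":        {"advantage": 0.2, "disadvantage": 0.5,
                     "mutual_advantage": -0.2, "neutral": 0.1},
    "extend":       {"advantage": 0.4, "disadvantage": 0.2,
                     "mutual_advantage": 0.0, "neutral": 0.2},
}

def expected_utility(action):
    # Sum of per-state utilities weighted by the corresponding probabilities.
    return sum(belief[s] * utilities[action][s] for s in states)

best = max(utilities, key=expected_utility)
print(best, round(expected_utility(best), 3))
```

In the moving-horizon game this selection is repeated every time step, with the belief updated from observations before each new maximization.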
|
15 |
Optimal control for data harvesting and signal model estimation. Zhu, Yancheng. 29 January 2025 (has links)
Over the last decade, the application of Wireless Sensor Networks (WSNs) has surged in fields such as environmental monitoring, human health, and smart cities. With this wealth of technologies comes the challenge of how to extract the volumes of data collected by sensor nodes distributed over large, often remote, geographical regions. Data harvesting is the problem of extracting measurements from the remote nodes of WSNs using mobile agents such as ground vehicles or drones. The use of mobile agents can significantly reduce the energy consumption of sensor nodes relative to other modes of extracting the data, extending the lifetime and capabilities of the WSN. Moreover, in remote areas where GPS may not be feasible due to limited power resources on the sensor nodes, accurate sensor node localization and signal broadcasting model estimation become critical. Therefore, designing the trajectories of mobile agents is crucial for rapid data collection and information gathering while adhering to vehicle constraints such as dynamics and energy usage. In this thesis, we focus on the application of optimal control methods to design trajectories for mobile agents in data harvesting. The thesis makes contributions in three areas: the creation of a parameterized optimal control policy, the application of Deep Reinforcement Learning (DRL) based control, and the use of Fisher Information (FI) as a cost metric in a Receding Horizon Control (RHC) method.
Parameterized Optimal Control Policy: Our contributions in this area begin by considering a data harvesting problem in 1-D space. We use a Hamiltonian analysis to show that the optimal control can be described by a parameterized policy, and then develop a gradient descent scheme using Infinitesimal Perturbation Analysis (IPA) to calculate the gradients of the cost function with respect to the control parameters. We also consider this problem in a multi-agent setting. To avoid collisions between agents, we apply a Control Barrier Function (CBF) technique to ensure the agents closely track the desired optimal trajectory to complete their mission while avoiding any collisions. Finally, we extend the problem to a mobile sensor scenario. In this more complicated setting we demonstrate that the optimization problem for the control policy parameters can be effectively solved using a heuristic approach.
Deep-Reinforcement-Learning based Control: The parametric optimal control approach cannot be easily extended from the 1-D setting to 2-D space. For this reason, we turn to DRL techniques. We again utilize Hamiltonian analysis to obtain the necessary conditions for optimal control, and then translate the problem into a Markov Decision Process (MDP) in discrete time. We apply reinforcement learning techniques, including double deep Q-learning and Proximal Policy Optimization (PPO), to find high-performing solutions across different scenarios. We demonstrate the effectiveness of these methods in 2-D simulations.
Fisher-Information-based Receding Horizon Control: For the data harvesting problem in large-scale unknown environments, estimating the parameters defining the broadcast model and the locations of all the nodes in the environment is critical for efficient extraction of the data. To address this, we start with a Received Signal Strength (RSS) model that relies on a Line-of-Sight (LoS) path-loss model with measurements corrupted by Gaussian noise. We first consider a single agent tasked with estimating these unknown parameters in discrete time, and then develop a Fisher Information Matrix (FIM) Receding Horizon (RH) controller for agent motion planning in real time. We also design a Neural Network (NN)-based controller to approximate the optimal solution of the Hamilton-Jacobi-Bellman (HJB) problem, maximizing information gain along a continuous-time trajectory. Additionally, a two-stage formation-based RH controller is designed for multi-agent scenarios. The experiments demonstrate that the optimal control policies contribute to high data-collection performance and that the FI-based RHC methods enhance estimation accuracy in various simulation environments.
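The FIM-based receding-horizon idea can be sketched for an RSS path-loss model with parameters (P0, eta): each measurement at distance d contributes a rank-one Fisher information term, and the next waypoint is chosen to maximize a D-optimality criterion. The node position, measurement sites, candidate waypoints and noise level below are all invented for illustration.

```python
import math

# Illustrative LoS path-loss model: mean RSS at distance d is P0 - 10*eta*log10(d),
# measurements corrupted by Gaussian noise of variance sigma2.
sigma2 = 4.0
node = (0.0, 0.0)  # transmitter location (known here only to build the example)

def fim_term(pos):
    """Per-measurement Fisher information for the parameters (P0, eta)."""
    d = math.hypot(pos[0] - node[0], pos[1] - node[1])
    g = (1.0, -10.0 * math.log10(d))  # gradient of the mean RSS w.r.t. (P0, eta)
    return [[g[i] * g[j] / sigma2 for j in range(2)] for i in range(2)]

def logdet2(M):
    return math.log(M[0][0] * M[1][1] - M[0][1] * M[1][0])

# Information already accumulated at previously visited positions.
visited = [(3.0, 0.0), (0.0, 5.0)]
F = [[sum(fim_term(p)[i][j] for p in visited) for j in range(2)] for i in range(2)]

# One receding-horizon step: pick the candidate waypoint that maximizes the
# D-optimality criterion log det(FIM) after adding its information term.
candidates = [(1.0, 1.0), (6.0, 0.0), (0.0, 8.0)]

def score(c):
    T = fim_term(c)
    return logdet2([[F[i][j] + T[i][j] for j in range(2)] for i in range(2)])

best = max(candidates, key=score)
print(best)
```

Repeating this selection as new measurements arrive gives a greedy one-step-horizon version of the information-driven motion planning described above; a longer horizon would score short candidate trajectories instead of single waypoints.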
|
16 |
Formations and Obstacle Avoidance in Mobile Robot Control. Ögren, Petter. January 2003 (has links)
This thesis consists of four independent papers concerning the control of mobile robots in the context of obstacle avoidance and formation keeping.
The first paper describes a new theoretically verifiable approach to obstacle avoidance. It merges the ideas of two previous methods, with complementary properties, by using a combined control Lyapunov function (CLF) and model predictive control (MPC) framework.
The second paper investigates the problem of moving a fixed formation of vehicles through a partially known environment with obstacles. Using an input-to-state stability (ISS) formulation, the concept of configuration space obstacles is generalized to leader-follower formations. This generalization makes it possible to convert the problem into a standard single-vehicle obstacle avoidance problem, such as the one considered in the first paper. The properties of goal convergence and safety thus carry over to the formation obstacle avoidance case.
In the third paper, coordination along trajectories of a nonhomogeneous set of vehicles is considered. By using a control Lyapunov function approach, properties such as bounded formation error and finite completion time are shown.
Finally, the fourth paper applies a generalized version of the control in the third paper to translate, rotate and expand a formation. It is furthermore shown how a partial decoupling of formation keeping and formation mission can be achieved. The approach is then applied to a scenario of underwater vehicles climbing gradients in search of specific thermal/biological regions of interest. The sensor data fusion problem for different formation configurations is investigated and an optimal formation geometry is proposed.
Keywords: Mobile Robots, Robot Control, Obstacle Avoidance, Multirobot System, Formation Control, Navigation Function, Lyapunov Function, Model Predictive Control, Receding Horizon Control, Gradient Climbing, Gradient Estimation.
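The flavor of combining a Lyapunov-style cost with a short-horizon constrained search can be conveyed by a toy example: a point robot seeks the origin while staying outside a circular obstacle, greedily picking at each step the one-step-ahead position that minimizes a Lyapunov-like cost subject to the obstacle constraint. This is only a one-step caricature of a CLF/MPC scheme, not the thesis's method, and all numbers are invented.

```python
import math

goal = (0.0, 0.0)
obs, obs_r = (2.0, 0.0), 0.8   # circular obstacle: center and radius
x = (4.0, 0.5)                 # robot start position
dt = 0.2                       # step length (unit-speed motion)

def V(p):
    """Candidate Lyapunov function: squared distance to the goal."""
    return (p[0] - goal[0]) ** 2 + (p[1] - goal[1]) ** 2

headings = [i * math.pi / 8 for i in range(16)]  # finite set of motion directions

for _ in range(60):
    feasible = []
    for t in headings:
        nxt = (x[0] + dt * math.cos(t), x[1] + dt * math.sin(t))
        # constraint: the next position must stay outside the obstacle
        if math.hypot(nxt[0] - obs[0], nxt[1] - obs[1]) > obs_r:
            feasible.append(nxt)
    x = min(feasible, key=V)   # greedy one-step horizon: best feasible V-decrease

print(round(math.hypot(x[0], x[1]), 2))
```

A real CLF/MPC controller would optimize over a multi-step horizon and enforce a certified Lyapunov decrease condition; the sketch only shows how the constraint set and the Lyapunov cost interact at a single decision step.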
|
17 |
Model predictive control based on an LQG design for time-varying linearizations. Benner, Peter; Hein, Sabine. 11 March 2010 (has links)
We consider the solution of nonlinear optimal control problems subject to stochastic perturbations with incomplete observations. In particular, we generalize results obtained by Ito and Kunisch in [8], where they consider a receding horizon control (RHC) technique based on linearizing the problem on small intervals. The linear-quadratic optimal control problem for the resulting linear time-invariant (LTI) problem is then solved using the linear quadratic Gaussian (LQG) design. Here, we allow linearization about an instationary reference trajectory and thus obtain a linear time-varying (LTV) problem on each time horizon. Additionally, we apply a model predictive control (MPC) scheme, which can be seen as a generalization of RHC, and we allow the covariance matrices of the noise processes to differ from the identity. We illustrate the MPC/LQG approach for a three-dimensional reaction-diffusion system. In particular, we discuss the benefits of time-varying linearizations over time-invariant ones.
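The state-feedback core of such a scheme on one horizon is a finite-horizon linear-quadratic problem for a time-varying linearization, solved by a backward Riccati recursion. The sketch below shows this for a scalar system x_{k+1} = a_k x_k + b_k u_k (the Gaussian filtering part of the LQG design is omitted, and all coefficients are invented for illustration).

```python
import math

# Scalar time-varying linearization along a reference trajectory (illustrative).
N = 20
a = [1.0 + 0.05 * math.sin(0.3 * k) for k in range(N)]
b = [0.1] * N
q, r, qf = 1.0, 0.5, 1.0   # state cost, input cost, terminal cost

# Backward Riccati sweep: P_N = qf, then for k = N-1, ..., 0
#   K_k = a_k b_k P_{k+1} / (r + b_k^2 P_{k+1})
#   P_k = q + a_k^2 P_{k+1} - a_k b_k P_{k+1} K_k
P, gains = qf, [0.0] * N
for k in reversed(range(N)):
    K = a[k] * b[k] * P / (r + b[k] ** 2 * P)
    gains[k] = K
    P = q + a[k] ** 2 * P - a[k] * b[k] * P * K

# Closed-loop simulation from x_0 = 5 with the time-varying law u_k = -K_k x_k.
x = 5.0
for k in range(N):
    x = a[k] * x + b[k] * (-gains[k] * x)
print(round(x, 3))  # state driven toward the origin
```

In an MPC/LQG loop this sweep would be redone on each new horizon around the updated linearization, with the first gain applied before the horizon recedes.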
|
19 |
Robust and stochastic MPC of uncertain-parameter systems. Fleming, James. January 2016 (has links)
Constraint handling is difficult in model predictive control (MPC) of linear differential inclusions (LDIs) and linear parameter varying (LPV) systems. The designer is faced with a choice between conservative bounds that may give poor performance and accurate ones that require heavy online computation. This thesis presents a framework to achieve a more flexible trade-off between these two extremes by using a state tube, a sequence of parametrised polyhedra that is guaranteed to contain the future state. To define controllers using a tube, one must ensure that the polyhedra are a subset of the region defined by constraints. Necessary and sufficient conditions for these subset relations follow from duality theory, and it is possible to apply these conditions to constrain predicted system states and inputs with only a little conservatism. This leads to a general method of MPC design for uncertain-parameter systems. The resulting controllers have strong theoretical properties, can be implemented using standard algorithms and outperform existing techniques. Crucially, the online optimisation used in the controller is a convex problem with a number of constraints and variables that increases only linearly with the length of the prediction horizon. This holds true for both LDI and LPV systems. For the latter, it is possible to optimise over a class of gain-scheduled control policies to improve performance, with a similar linear increase in problem size. The framework extends to stochastic LDIs with chance constraints, for which there are efficient suboptimal methods using online sampling. Sample approximations of chance-constraint-admissible sets are generally not positively invariant, which motivates the novel concept of ‘sample-admissible’ sets with this property to ensure recursive feasibility when using sampling methods.
The thesis concludes by introducing a simple, convex alternative to chance-constrained MPC that applies a robust bound to the time average of constraint violations in closed loop.
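The idea of a sample (scenario) approximation of a chance constraint can be illustrated in one dimension: for a one-step system x+ = x + u + w with Gaussian disturbance, pick the largest input such that the constraint holds for every sampled disturbance scenario, then check the resulting violation probability empirically. This toy is only in the spirit of the sampling methods mentioned above, not the thesis's tube construction, and all numbers are invented.

```python
import random

random.seed(3)
x = 0.0
# Disturbance scenarios for w ~ N(0, 0.2^2).
scenarios = [random.gauss(0, 0.2) for _ in range(500)]

# Scenario approximation of the chance constraint x + u + w <= 1:
# tighten the bound by the worst sampled disturbance.
u = 1.0 - x - max(scenarios)

# Empirical check of the violation probability on fresh samples.
trials = 20000
violations = sum(1 for _ in range(trials) if x + u + random.gauss(0, 0.2) > 1.0)
print(round(u, 3), violations / trials)
```

With 500 scenarios the worst sampled disturbance already gives a small empirical violation rate; the recursive-feasibility issue the abstract raises is that the set of states from which such a sampled constraint remains satisfiable need not be invariant, which is what the 'sample-admissible' construction addresses.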
|