41

Three Essays Regarding the Economics of Resources with Spatial-Dynamic Transition Processes

Goodenberger, James Stevenson 21 December 2016 (has links)
No description available.
42

Deposit facilities and consumption smoothing: a dynamic stochastic model of precautionary wealth choices for a credit-constrained rural household

Gomez-Soto, Franz M. 16 July 2007 (has links)
No description available.
43

Bounding Reachable Sets for Global Dynamic Optimization

Cao, Huiyi January 2021 (has links)
Many chemical engineering applications, such as safety verification and parameter estimation, require global optimization of dynamic models. Global optimization algorithms typically require global bounding information for the dynamic system, to aid in locating and verifying the global optimum. The typical approach for providing these bounds is to generate convex relaxations of the dynamic system and minimize them using a local optimization solver. Tighter convex relaxations typically lead to tighter lower bounds, so that the number of iterations in global optimization algorithms can be reduced. To carry out this local optimization efficiently, subgradient-based solvers require gradients or subgradients to be furnished; smooth convex relaxations would aid local optimization even more. To address these issues and improve the computational performance of global dynamic optimization, this thesis proposes several novel formulations for constructing tight convex relaxations of dynamic systems. In some cases, these relaxations are smooth.

Firstly, a new strategy is developed to generate convex relaxations of implicit functions under minimal assumptions. These convex relaxations are described by parametric programs whose constraints are convex relaxations of the residual function. Compared with established methods for relaxing implicit functions, this new approach does not assume uniqueness of the implicit function and does not require the original residual function to be factorable. The new strategy is demonstrated to construct tighter convex relaxations in multiple numerical examples. Moreover, it extends to inverse functions, feasible-set mappings in constraint satisfaction problems, and parametric ordinary differential equations (ODEs). Using a proof-of-concept implementation in Julia, numerical examples illustrate the convex relaxations produced for various implicit functions and optimal-value functions; in certain cases, these are tighter than those generated with existing methods.

Secondly, a novel optimization-based framework is introduced for computing time-varying interval bounds for ODEs. Such interval bounds are useful for constructing convex relaxations of ODEs, and tighter interval bounds typically translate into tighter convex relaxations. This framework recovers several established bounding approaches but also yields many new ones, some of which generate tighter interval bounds than established methods and are therefore potentially helpful for constructing tighter convex relaxations of ODEs. Several of these approaches have been implemented in Julia.

Thirdly, a new approach is developed to improve a state-of-the-art ODE relaxation method and generate tighter, smooth convex relaxations. Unlike in state-of-the-art methods, the auxiliary ODEs used in the new methods for computing convex relaxations have continuous right-hand-side functions. Such continuity not only makes the new methods easier to implement, but also permits the evaluation of subgradients of the convex relaxations. Under some additional assumptions, differentiable convex relaxations can be constructed. Moreover, the new convex relaxations are demonstrated to be at least as tight as those of state-of-the-art methods, which benefits global dynamic optimization. This approach has been implemented in Julia, and numerical examples are presented.

Lastly, a new approach is proposed for generating a guaranteed lower bound on the optimal solution value of a nonconvex optimal control problem (OCP). This lower bound is obtained by constructing a relaxed convex OCP that satisfies the sufficient optimality conditions of Pontryagin's Minimum Principle. Such lower-bounding information is useful for optimizing the original nonconvex OCP to a global minimum using deterministic global optimization algorithms. Compared with established methods for underestimating nonconvex OCPs, this new approach constructs tighter lower bounds. Moreover, since it does not involve any numerical approximation of the control and state trajectories, it provides lower bounds that are reliable and consistent. This approach has been implemented for control-affine systems, and numerical examples are presented. / Thesis / Doctor of Philosophy (PhD)
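
To make the bounding idea above concrete, here is a minimal sketch of interval bound propagation for a parametric ODE, using naive interval arithmetic over explicit Euler steps. The toy ODE, function names, and step count are our own illustrative assumptions, and this naive scheme ignores truncation error, so it is far cruder than the differential-inequality and relaxation methods developed in the thesis (which were implemented in Julia).

```python
# Minimal sketch: naive interval bound propagation for the parametric ODE
#   x'(t) = -p * x(t),  p in [pL, pU],  x(0) in [x0L, x0U],
# using explicit Euler steps with interval arithmetic. Illustrative only:
# it ignores truncation error, so it is not a rigorous enclosure.

def interval_mul(a, b):
    """Interval product [aL,aU] * [bL,bU]."""
    (aL, aU), (bL, bU) = a, b
    products = (aL * bL, aL * bU, aU * bL, aU * bU)
    return (min(products), max(products))

def euler_bounds(p, x0, t_end, n_steps):
    """Propagate interval state bounds for x' = -p*x via interval Euler."""
    h = t_end / n_steps
    xL, xU = x0
    for _ in range(n_steps):
        # dx/dt = -p * x as an interval: -p lies in [-pU, -pL]
        dL, dU = interval_mul((-p[1], -p[0]), (xL, xU))
        xL, xU = xL + h * dL, xU + h * dU
    return (xL, xU)

if __name__ == "__main__":
    # p in [0.9, 1.1], x(0) in [0.95, 1.05]
    print(euler_bounds((0.9, 1.1), (0.95, 1.05), t_end=1.0, n_steps=100))
```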
44

Impact of Transaction costs on dynamic portfolio optimizations: A comparison of active and passive investing in the realm of the Swedish stock market

Georgiev, Toma, Kurmakhadov, Harbi January 2022 (has links)
A growing number of studies have been conducted in the sphere of portfolio analysis concerning different approaches for analyzing stocks and outperforming the market. Pioneers of portfolio theory like William Sharpe and Harry Markowitz developed strategies and ratios for portfolio analysis that could generate positive risk-adjusted returns. This paper employs a number of these strategies in an attempt to generate a return higher than the market index while accounting for the expenses that come with buying and selling stocks (transaction costs). The purpose of this study is thus to assess how active investing measures up to passive investing in the Swedish stock market. To achieve this goal, the authors create portfolios on a weekly basis from securities in the Swedish OMX30 index, using Maximum Sharpe, Maximum M2, Minimum Variance, and Equally Weighted optimizations; the significance of transaction costs is then tested and a comparison with the market index is made. The results suggest that, in the Swedish stock market, investing in dynamically optimized portfolios based on maximizing the Sharpe ratio or M2 generates higher returns than passively investing in the market index, and that the significance of transaction costs varies with the amount of capital invested in the portfolios.
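
As a rough sketch of the weekly re-optimization described above, the following maximizes the Sharpe ratio over a small universe under long-only, fully-invested constraints, then charges a proportional transaction cost on turnover against the previous week's weights. The data, cost rate, risk-free rate, and function names are our own illustrative assumptions, not values from the paper.

```python
# Illustrative sketch: maximize the Sharpe ratio subject to long-only,
# fully-invested constraints, then charge proportional transaction costs
# on turnover versus the previous week's weights. All inputs are toy data.
import numpy as np
from scipy.optimize import minimize

def max_sharpe_weights(mu, cov, rf=0.0):
    """Long-only, fully-invested weights maximizing (w'mu - rf) / sqrt(w'C w)."""
    n = len(mu)
    def neg_sharpe(w):
        return -(w @ mu - rf) / np.sqrt(w @ cov @ w)
    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
    bounds = [(0.0, 1.0)] * n
    res = minimize(neg_sharpe, np.full(n, 1.0 / n), bounds=bounds, constraints=cons)
    return res.x

def net_return(w_new, w_old, realized_returns, cost_rate=0.001):
    """Portfolio return net of proportional costs on turnover."""
    turnover = np.abs(w_new - w_old).sum()
    return w_new @ realized_returns - cost_rate * turnover

rng = np.random.default_rng(0)
mu = rng.normal(0.001, 0.002, 5)      # toy weekly expected returns
A = rng.normal(size=(5, 5))
cov = A @ A.T * 1e-4                  # toy positive-definite covariance
w_prev = np.full(5, 0.2)              # last week's equally weighted portfolio
w = max_sharpe_weights(mu, cov)
print(w, net_return(w, w_prev, mu))
```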
45

Selecting the best control methodology to improve the efficiency of discontinuous reactors

Pahija, E., Manenti, F., Mujtaba, Iqbal M. January 2013 (has links)
This work investigates in detail several methodologies for improving the optimal control of discontinuous processes. It shows that whenever a batch dynamic optimization problem is solved, the optimum found is relative to the control methodology adopted, and the result may be a sub-optimum, since other more (or apparently less!) appealing control methodologies might lead to "better" optimal solutions. The selection of the best control methodology for the dynamic optimization is addressed for batch reactors, using gPROMS ModelBuilder 3.5.2 for dynamic modeling and BzzMath 6.0 optimizers to handle control and optimization issues.
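
The paper's central point, that the "optimum" found depends on the control methodology adopted, can be reproduced on a toy problem. The sketch below optimizes the temperature profile of a hypothetical batch reactor (A → B → C) under progressively richer piecewise-constant parameterizations, and the richer parameterizations attain objective values at least as good. The reactor model and all constants are our own inventions, not the gPROMS/BzzMath systems studied in the paper.

```python
# Toy illustration: the optimum found depends on how the control (here a
# temperature profile) is parameterized. Reactor A -> B -> C with
# Arrhenius-type rates; every constant below is invented for illustration.
import numpy as np
from scipy.optimize import minimize

def final_B(T_segments, t_final=1.0, n_steps=200):
    """Integrate dA/dt = -k1*A, dB/dt = k1*A - k2*B with piecewise-constant T."""
    A, B = 1.0, 0.0
    h = t_final / n_steps
    for i in range(n_steps):
        T = T_segments[int(i * len(T_segments) / n_steps)]
        k1 = 4000.0 * np.exp(-2500.0 / T)   # desired reaction (toy constants)
        k2 = 6.2e5 * np.exp(-5000.0 / T)    # degradation reaction (toy constants)
        A, B = A - h * k1 * A, B + h * (k1 * A - k2 * B)
    return B

for n_seg in (1, 2, 8):   # coarser vs. finer control parameterizations
    res = minimize(lambda u: -final_B(u), x0=np.full(n_seg, 350.0),
                   bounds=[(300.0, 400.0)] * n_seg)
    print(f"{n_seg} segment(s): best B(tf) = {-res.fun:.4f}")
```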
46

Optimal Startup of Cryogenic Air Separation units: Modeling, Simulation, Optimization, and Control

Quarshie, Anthony Worlanyo Kwaku January 2023 (has links)
Cryogenic air separation units (ASUs) are the most widely used technology for industrial-scale production of large amounts of high-purity air components. These processes are highly energy-intensive, which has motivated the development of demand response strategies that adapt their operation to the increased volatility of the energy market. The startup of ASUs warrants particular consideration within this context. ASUs are tightly integrated, thermally and materially, and have slow dynamics. This results in startup times on the order of hours to a day, during which electricity is consumed with limited revenue generation. In the current environment of electricity price deregulation, it may be economically advantageous for ASUs to shut down during periods of high electricity prices, increasing the frequency of startups. This presents a promising research opportunity, especially because ASU startup has received relatively little attention in the literature. This thesis investigates the optimal startup of ASUs using dynamic optimization.

First, this thesis focuses on startup model development for the multiproduct ASU. Startup model development requires accounting for the discontinuities present at startup. Four main discontinuities are considered: stage liquid flows, stage vapor flows, liquid flow out of sumps and reboilers, and the opening and closing of valves. Other discontinuities accounted for include changes in the number of phases of streams. These discontinuities are approximated with smoothing formulations, mostly based on hyperbolic tangent functions, to allow the application of gradient-based optimization. The modeling approach was assessed through three case studies: dynamic simulation of a successful startup, dynamic simulation of a failed startup, and dynamic optimization using a least-squares minimization formulation.

Following startup model development, this thesis develops a framework for optimizing ASU startups using readily interpretable metrics of time and economics. For economics, cumulative profit over the startup horizon is considered. Two events are tracked for the definition of time metrics: the time taken to obtain product purities and the time to obtain steady-state product flows. Novel approaches are proposed for quantifying these time metrics, which are used as objective functions and in formulating constraints. / Thesis / Doctor of Philosophy (PhD)
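
A minimal sketch of the smoothing formulation mentioned above: a discontinuous valve on/off switch replaced by a hyperbolic tangent approximation, which is differentiable everywhere and therefore compatible with gradient-based optimization. The threshold, steepness parameter, and function names are illustrative assumptions, not values from the thesis.

```python
# Sketch: smoothing a discontinuous valve switch with tanh so that
# gradient-based optimization can be applied. 'eps' controls steepness;
# all values are illustrative.
import numpy as np

def valve_flow_discontinuous(x, threshold=0.5, max_flow=1.0):
    """Idealized switch: full flow once the liquid level x exceeds a threshold."""
    return max_flow if x > threshold else 0.0

def valve_flow_smoothed(x, threshold=0.5, max_flow=1.0, eps=0.01):
    """Smooth tanh approximation: differentiable everywhere, and it
    approaches the discontinuous switch as eps -> 0."""
    return 0.5 * max_flow * (1.0 + np.tanh((x - threshold) / eps))

x = np.linspace(0.0, 1.0, 11)
print([round(valve_flow_smoothed(xi), 3) for xi in x])
```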
47

Improvement of multicomponent batch reactive distillation under parameter uncertainty by inferential state with model predictive control

Weerachaipichasgul, W., Kittisupakorn, P., Mujtaba, Iqbal M. January 2013 (has links)
Batch reactive distillation aims at achieving a high-purity product; therefore, considerable effort goes into finding an optimal operating condition and an effective control strategy to maximize the amount of high-purity product obtained. An off-line dynamic optimization is first performed, with maximum productivity as the objective function, to provide the optimal product composition for the batch reactive distillation. An inferential state estimator (an extended Kalman filter, EKF), based on simplified mathematical models and on-line temperature measurements, is incorporated to estimate the compositions in the reflux drum and the reboiler. Model predictive control (MPC) is implemented to track the desired product compositions subject to the simplified model equations. Simulation results demonstrate that the inferential state estimator provides good composition estimates; consequently, the control performance of MPC with the inferential state is better than that of PID control. In addition, in the presence of unknown/uncertain parameters (the forward reaction rate constant), the estimator is still able to provide accurate concentration estimates. As a result, MPC with the inferential state remains robust and applicable to real plants.
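
Schematically, the inferential estimation described above amounts to an extended Kalman filter that corrects a model-based composition prediction using temperature measurements. The one-state toy model, measurement map, and covariances below are placeholders, not the column model used in the paper.

```python
# Generic EKF predict/update step, schematic of inferring an unmeasured
# composition x from a temperature measurement y = h(x). The dynamics f,
# measurement map h, and covariances are toy placeholders.
import numpy as np

def ekf_step(x, P, y, f, F, h, H, Q, R):
    """One extended Kalman filter iteration (scalar state for clarity)."""
    # Predict state and error variance
    x_pred = f(x)
    P_pred = F(x) * P * F(x) + Q
    # Update with the measurement y
    K = P_pred * H(x_pred) / (H(x_pred) * P_pred * H(x_pred) + R)
    x_new = x_pred + K * (y - h(x_pred))
    P_new = (1.0 - K * H(x_pred)) * P_pred
    return x_new, P_new

# Toy model: composition decays first-order; temperature rises as the
# light component depletes (an invented monotone relation).
f = lambda x: 0.99 * x           # state transition
F = lambda x: 0.99               # its Jacobian
h = lambda x: 350.0 - 20.0 * x   # "temperature" measurement model
H = lambda x: -20.0              # its Jacobian

x, P = 0.9, 0.1
for y_meas in (333.0, 334.5, 336.0):
    x, P = ekf_step(x, P, y_meas, f, F, h, H, Q=1e-4, R=0.25)
    print(f"estimated composition: {x:.3f}")
```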
48

An Approach to Real Time Adaptive Decision Making in Dynamic Distributed Systems

Adams, Kevin Page 20 January 2006 (has links)
Efficient operation of a dynamic system requires (near) optimal real-time control decisions. Those decisions depend on a set of control parameters that change over time. Very often, the optimal decision can be made only with knowledge of future values of the control parameters. As a consequence, the decision process is heuristic in nature, and the optimal decision can be determined only after the fact, once the uncertainty is removed. For some types of dynamic systems, the heuristic approach can be very effective. The basic premise is that the future values of the control parameters can be predicted with sufficient accuracy, either from a good model of the system or from historical data. In many cases, a good model is not available; prediction using historical data is then the only option. It is necessary to detect similarities with the current situation and extrapolate future values; in other words, we need to (quickly) identify patterns in historical data that match the current data pattern. Low sensitivity of the optimal solution is critical: small variations in data patterns should affect the optimal solution only minimally. Resource allocation problems and other "discrete decision systems" are good examples of such systems. The main contribution of this work is a novel heuristic methodology that uses neural networks for classifying, learning, and detecting changing patterns, as well as for making (near) real-time decisions. We improve on existing approaches by providing a real-time adaptive approach that accounts for changes in system behavior with minimal operational delay and without the need for an accurate model. The methodology is validated by extensive simulation and practical measurements. Two metrics are proposed to quantify the quality of control decisions, along with a comparison to the optimal solution. / Ph. D.
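
The pattern-matching step can be caricatured with a nearest-neighbor lookup: find the historical window most similar to the current window of control-parameter values and use what followed it as the forecast. The dissertation uses neural networks for this; the sketch below is a deliberately simpler stand-in, on synthetic data.

```python
# Simpler stand-in for the pattern-matching idea: find the historical
# window closest to the current one and predict the value that followed it.
# The dissertation itself uses neural networks; the data here is synthetic.
import numpy as np

def predict_next(history, current_window):
    """Forecast the next control-parameter value by nearest-neighbor
    matching of the current window against all historical windows."""
    w = len(current_window)
    best_dist, best_next = np.inf, None
    for i in range(len(history) - w):
        dist = np.linalg.norm(history[i:i + w] - current_window)
        if dist < best_dist:
            best_dist, best_next = dist, history[i + w]
    return best_next

rng = np.random.default_rng(1)
t = np.arange(500)
load = np.sin(2 * np.pi * t / 50) + 0.1 * rng.standard_normal(500)  # synthetic load
print(predict_next(load[:-5], load[-5:]))
```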
49

Actuarial models and methods for quantitative risk assessment in a Solvency II environment

Ben Dbabis, Makram 14 December 2012 (has links)
The new prudential standards, Solvency II, address the control of the solvency of insurance and reinsurance market participants. In this thesis, we propose technical means for assessing the solvency capital needed to keep the probability of ruin below the 0.5% target set by Solvency II, within the framework of internal models. The first part focuses on the problem of the economic valuation of life insurance liabilities, linked to the options embedded in insurance contracts, and hence on obtaining the distribution of net asset value at a one-year horizon and its 0.5% quantile. It presents several modeling approaches to this problem: a nested (simulations-within-simulations) approach, which is purely simulation-based and very costly in computation time; an acceleration algorithm for nested simulations, to overcome the limits of the first approach; a replicating portfolio approach; and a loss function approach. The second part focuses on the modeling of technical risks that insurers capture poorly, developing two stochastic approaches to model, within an internal model, longevity, mortality, and long-term care (morbidity) risks. The third part addresses the optimization of economic capital, using reinsurance as a tool for reducing the economic capital requirement and proposing approaches for optimal reinsurance choice.
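
The quantity driving all of these approaches, the 0.5% quantile of net asset value at a one-year horizon, can be sketched as the outer loop of a nested simulation. The lognormal asset / fixed liability model below is a toy placeholder; in a real internal model, each outer scenario would require an inner revaluation simulation.

```python
# Outer loop of a (nested) simulation for the Solvency II target: the
# 0.5% quantile of the net asset value distribution at a one-year horizon.
# The lognormal asset / fixed liability model is a toy placeholder; in a
# real internal model each scenario needs an inner revaluation step.
import numpy as np

rng = np.random.default_rng(42)
n_outer = 100_000

assets_0, liabilities = 110.0, 100.0
# Toy real-world scenario generator for the asset value in one year
assets_1y = assets_0 * np.exp(rng.normal(0.02, 0.08, n_outer))
net_asset_value = assets_1y - liabilities   # inner valuation stub

q_0005 = np.quantile(net_asset_value, 0.005)
capital = (assets_0 - liabilities) - q_0005  # capital so that P(ruin) <= 0.5%
print(f"0.5% quantile of NAV: {q_0005:.2f}, implied capital need: {capital:.2f}")
```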
50

Design of metaheuristics for dynamic optimization: application to the analysis of MRI image sequences

Lepagnot, Julien 01 December 2011 (has links)
Many real-world optimization problems are dynamic: their objective function (or cost function) changes over time. The main approach in the literature is to adapt static optimization algorithms to dynamic optimization by compensating for their intrinsic shortcomings. Rather than following this already widely explored path, the main goal of this thesis is to develop an algorithm designed entirely for dynamic optimization. The first part of this thesis is thus devoted to the design of an algorithm that should not only stand out from competing algorithms for its originality, but also perform better. In this context, our goal is to develop a dynamic optimization metaheuristic. Two agent-based algorithms, MADO (MultiAgent algorithm for Dynamic Optimization) and MLSDO (Multiple Local Search algorithm for Dynamic Optimization), are proposed and validated on the two main benchmarks available for dynamic environments: MPB (Moving Peaks Benchmark) and GDBG (Generalized Dynamic Benchmark Generator). The results obtained on these benchmarks show the efficiency of the strategies implemented by these algorithms; in particular, MLSDO ranked first among seven algorithms evaluated on GDBG, and second among sixteen algorithms evaluated on MPB. These algorithms are then applied to real-world problems in medical image sequence processing (segmentation and registration of brain cine-MRI sequences). To our knowledge, this work is innovative in that the dynamic optimization approach had never before been investigated for these problems. The performance gains obtained show the relevance of the proposed dynamic optimization algorithms for this kind of application.
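
For readers unfamiliar with the benchmarks cited above, the Moving Peaks Benchmark is essentially a maximum over several peak-shaped functions whose heights and centers drift at regular intervals, so an optimizer must track moving optima. A stripped-down sketch with our own parameter choices, not the canonical MPB settings:

```python
# Stripped-down Moving Peaks Benchmark: the objective is the max over
# several cone-shaped peaks whose centers and heights drift at each
# "change period". Parameters are illustrative, not the official settings.
import numpy as np

rng = np.random.default_rng(7)
DIM, N_PEAKS = 2, 5
centers = rng.uniform(0, 100, (N_PEAKS, DIM))
heights = rng.uniform(30, 70, N_PEAKS)
widths = rng.uniform(1, 12, N_PEAKS)

def objective(x):
    """Cone-shaped peaks: height minus width-scaled distance to the center."""
    return max(h - w * np.linalg.norm(x - c)
               for h, w, c in zip(heights, widths, centers))

def environment_change(shift=1.0, height_sigma=7.0):
    """Drift each peak's center and height, as MPB does between changes."""
    global centers, heights
    centers += shift * rng.standard_normal(centers.shape)
    heights = np.clip(heights + height_sigma * rng.standard_normal(N_PEAKS), 30, 70)

print(objective(np.array([50.0, 50.0])))
environment_change()
print(objective(np.array([50.0, 50.0])))   # same point, new landscape
```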
