21 |
Stochastic programming models and methods for portfolio optimization and risk management - Meskarian, Rudabeh, January 2012 (has links)
This project focuses on stochastic models and methods and their application in portfolio optimization and risk management. In particular, it involves the development and analysis of novel numerical methods for solving these types of problem. First, we study new numerical methods for a general second order stochastic dominance model where the underlying functions are not necessarily linear. Specifically, we penalize the second order stochastic dominance constraints into the objective under Slater’s constraint qualification and then apply the well-known stochastic approximation method and the level function methods to solve the penalized problem, presenting the corresponding convergence analysis. All methods are applied to portfolio optimization problems where the underlying functions are not necessarily linear. The results suggest that the portfolio strategy generated by the second order stochastic dominance model outperforms the strategy generated by the Markowitz model, in the sense of having higher return and lower risk. Furthermore, a nonlinear supply chain problem is considered, where the performance of the level function method is compared to that of the cutting plane method. The results suggest that the level function method is more efficient, in the sense of having lower CPU time as well as being less sensitive to the problem size. This is followed by a study of multivariate stochastic dominance constraints. We propose a penalization scheme for the multivariate stochastic dominance constraint and present the analysis regarding the Slater constraint qualification. The penalized problem is solved by the level function methods and a modified cutting plane method, and compared to the cutting surface method proposed in [70] and the linearized method proposed in [4]. The convergence analysis of the proposed algorithms is presented.
The proposed numerical schemes are applied to a generic budget allocation problem, where it is shown that the proposed methods outperform the linearized method when the problem size is large. Moreover, a portfolio optimization problem is considered, where it is shown that the portfolio strategy generated by the multivariate second order stochastic dominance model outperforms the portfolio strategy generated by the Markowitz model, in the sense of having higher return and lower risk. The performance of the algorithms is also investigated with respect to computation time and problem size. It is shown that the level function method and the cutting plane method outperform the cutting surface method, both having lower CPU time and being less sensitive to the problem size. Finally, reward-risk analysis is studied as an alternative to stochastic dominance. Specifically, we study robust reward-risk ratio optimization. We propose two robust formulations, one based on a mixture distribution and the other based on a first order moment approach. We propose a sample average approximation formulation as well as a penalty scheme for the two robust formulations respectively and solve the latter with the level function method. The convergence analysis is presented, and the proposed models are applied to the Sortino ratio, with numerical test results reported. The numerical results suggest that the robust formulation based on the first order moment yields the most conservative portfolio strategy compared with the mixture distribution model and the nominal model.
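The second order stochastic dominance (SSD) relation underlying this model can be checked empirically: X dominates Y in second order when the expected shortfall of X is no larger than that of Y at every target level. The sketch below is illustrative only (plain Python over equal-weight empirical distributions; the function names are ours, not the thesis's), with `ssd_violation` giving the quantity a penalty scheme would add to the objective:

```python
def expected_shortfall_below(t, samples):
    """E[(t - R)_+] under the empirical distribution of samples."""
    return sum(max(t - r, 0.0) for r in samples) / len(samples)

def ssd_dominates(x_returns, y_returns):
    """True if X dominates Y in second order (empirical test).
    It suffices to check the inequality at the pooled sample points."""
    targets = sorted(set(x_returns) | set(y_returns))
    return all(expected_shortfall_below(t, x_returns)
               <= expected_shortfall_below(t, y_returns) + 1e-12
               for t in targets)

def ssd_violation(x_returns, y_returns):
    """Largest violation of the SSD inequality: the term a penalty
    scheme would multiply by a penalty parameter and add to the objective."""
    targets = sorted(set(x_returns) | set(y_returns))
    return max(max(expected_shortfall_below(t, x_returns)
                   - expected_shortfall_below(t, y_returns), 0.0)
               for t in targets)
```

A portfolio with pointwise-better sorted returns dominates, and the violation is zero; reversing the roles produces a strictly positive violation.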
|
22 |
Optimisation and Bayesian optimality - Joyce, Thomas, January 2016 (has links)
This doctoral thesis presents the results of work on optimisation algorithms. We first give a detailed exploration of the problems involved in comparing optimisation algorithms. In particular, we provide extensions and refinements to no free lunch results, exploring algorithms with arbitrary stopping conditions, optimisation under restricted metrics, parallel computing and free lunches, and head-to-head minimax behaviour. We also characterise no free lunch results in terms of order statistics. We then ask what really constitutes understanding of an optimisation algorithm. We argue that one central part of understanding an optimiser is knowing its Bayesian prior and cost function. We then pursue a general Bayesian framing of optimisation, and prove that this Bayesian perspective is applicable to all optimisers, and that even seemingly non-Bayesian optimisers can be understood in this way. Specifically, we prove that arbitrary optimisation algorithms can be represented as a prior and a cost function. We examine the relationship between the Kolmogorov complexity of the optimiser and the Kolmogorov complexity of its corresponding prior. We also extend our results from deterministic optimisers to stochastic and forgetful optimisers, and we show that selecting a prior uniformly at random is not equivalent to selecting an optimisation behaviour uniformly at random. Lastly, we consider how best to gain a Bayesian understanding of real optimisation algorithms. We use the developed Bayesian framework to explore the effects of some common approaches to constructing meta-heuristic optimisation algorithms, such as on-line parameter adaptation. We conclude by exploring an approach to uncovering the probabilistic beliefs of optimisers with a “shattering” method.
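The idea that an optimiser can be viewed as a prior plus a cost function can be made concrete with a toy greedy Bayesian optimiser over a finite domain. This is a hypothetical sketch, not the thesis's construction: the prior is a distribution over candidate objective functions (tuples of values), conditioning discards functions inconsistent with the observations, and the next query minimises posterior expected cost:

```python
def posterior(prior, observed):
    """Condition a prior over candidate objective functions (tuples of
    values over a finite domain) on the points observed so far."""
    consistent = {f: p for f, p in prior.items()
                  if all(f[x] == v for x, v in observed.items())}
    z = sum(consistent.values())
    return {f: p / z for f, p in consistent.items()}

def next_query(prior, observed, domain):
    """Greedy one-step rule: query the unseen point with the lowest
    posterior expected objective value (for minimisation)."""
    post = posterior(prior, observed)
    unseen = [x for x in domain if x not in observed]
    return min(unseen, key=lambda x: sum(p * f[x] for f, p in post.items()))
```

With a prior over two candidate functions sharing the value seen at the first query, the rule picks whichever remaining point has the lower posterior mean, so different priors induce different search behaviours, which is the correspondence the thesis formalises.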
|
23 |
Stochastic joint replenishment problems : periodic review policies - Alrasheedi, Adel Fahad, January 2015 (has links)
Operations managers of manufacturing systems, distribution systems, and supply chains address lot sizing and scheduling problems as part of their duties. These problems concern decisions about the size of orders and their schedule. In general, products share or compete for common resources and thus require coordination of their replenishment decisions, whether or not replenishment involves manufacturing operations. This research is concerned with joint replenishment problems (JRPs), which are part of multi-item lot sizing and scheduling problems in manufacturing and distribution systems with a single echelon/stage. The principal purpose of this research is to develop three new periodic review policies for the stochastic joint replenishment problem. It also highlights the lack of research on joint replenishment problems with different demand classes (DSJRP); accordingly, a periodic review policy is developed for the problem in which the inventory system faces two demand classes, deterministic demand and stochastic demand. Heuristic algorithms have been developed to obtain (near) optimal parameters for the three policies, and a further heuristic algorithm has been developed for the DSJRP. Numerical tests against literature benchmarks are presented.
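A common periodic review rule for joint replenishment is the order-up-to (R, S) policy: every R periods, any item below its level S_i is raised to S_i, and all items ordered together share one major ordering cost plus a minor cost per item. The simulation below is a toy illustration with hypothetical cost parameters, not one of the three policies developed in the thesis:

```python
def simulate_RS(demands, R, S, major_K, minor_k, holding_h):
    """Simulate a periodic-review order-up-to policy for several items
    sharing a major ordering cost.  demands is a list of per-period
    demand tuples, one entry per item; all costs are illustrative."""
    n_items = len(S)
    inv = list(S)
    cost = 0.0
    for t, period_demand in enumerate(demands):
        if t % R == 0:                                   # review epoch
            ordered = [i for i in range(n_items) if inv[i] < S[i]]
            if ordered:
                cost += major_K + minor_k * len(ordered)  # joint order
                for i in ordered:
                    inv[i] = S[i]                         # raise to S_i
        for i in range(n_items):
            inv[i] -= period_demand[i]
            cost += holding_h * max(inv[i], 0)            # holding cost
    return cost
```

Heuristics like those in the thesis would search over R and the S_i to minimise the long-run average of this cost under stochastic demand.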
|
24 |
Parallel solution of linear programs - Smith, Edmund, January 2013 (has links)
The factors limiting the performance of computer software periodically undergo sudden shifts resulting from technological progress, and these shifts can have profound implications for the design of high performance codes. At the present time, the speed with which hardware can execute a single stream of instructions has reached a plateau. It is now the number of instruction streams that may be executed concurrently which underpins estimates of compute power, and with this change, a critical limitation on the performance of software has come to be the degree to which it can be parallelised. The research in this thesis is concerned with the means by which codes for linear programming may be adapted to this new hardware. For the most part, it is codes implementing the simplex method which will be discussed, though these typically have lower performance for single solves than those implementing interior point methods. However, the ability of the simplex method to rapidly re-solve a problem makes it at present indispensable as a subroutine for mixed integer programming. The long history of the simplex method as a practical technique, with applications in many industries and in government, has led to such codes reaching a great level of sophistication. It would be unexpected for a research project such as this one to match the performance of top commercial codes with many years of development behind them. The simplex codes described in this thesis are, however, able to solve real problems of small to moderate size, rather than being confined to random or otherwise artificially generated instances. The remainder of this thesis is structured as follows. The rest of this chapter gives a brief overview of the essential elements of modern parallel hardware and of the linear programming problem. Both the simplex method and interior point methods are discussed, along with some of the key algorithmic enhancements required for such systems to solve real-world problems.
Some background on the parallelisation of both types of code is given. The next chapter describes two standard simplex codes designed to exploit the current generation of hardware. i6 is a parallel standard simplex solver capable of being applied to a range of real problems, and it shows exceptional performance for dense, square programs. i8 is also a parallel standard simplex solver, but implemented for graphics processing units (GPUs).
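The attraction of the standard (dense) simplex method on parallel hardware comes from steps such as pricing, where the reduced cost of every nonbasic column can be computed independently. A minimal sketch of that data parallelism (illustrative only; in pure Python the thread pool shows the structure of the computation rather than delivering a real speed-up):

```python
from concurrent.futures import ThreadPoolExecutor

def reduced_costs(A, c, duals):
    """Price every column j of the constraint matrix: d_j = c_j - y^T a_j.
    Each column is independent of the others, so the map over columns is
    embarrassingly parallel - the property dense simplex codes exploit."""
    def one_column(j):
        return c[j] - sum(duals[i] * A[i][j] for i in range(len(A)))
    with ThreadPoolExecutor() as pool:
        return list(pool.map(one_column, range(len(c))))
```

A production code would instead vectorise this over SIMD lanes, cores, or GPU threads, but the dependency structure is the same.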
|
25 |
Portfolio optimisation models - Arbex Valle, Cristiano, January 2013 (has links)
In this thesis we consider three different problems in the domain of portfolio optimisation. The first problem we consider is that of selecting an Absolute Return Portfolio (ARP). ARPs are usually seen as financial portfolios that aim to produce a good return regardless of how the underlying market performs, but our literature review shows that there is little agreement on what constitutes an ARP. We present a clear definition via a three-stage mixed-integer zero-one program for the problem of selecting an ARP. The second problem considered is that of designing a Market Neutral Portfolio (MNP). MNPs are generally defined as financial portfolios that (ideally) exhibit performance independent from that of an underlying market, but, once again, the existing literature is very fragmented. We formulate the problem of constructing an MNP as a mixed-integer non-linear program (MINLP) which minimises the absolute value of the correlation between portfolio return and underlying benchmark return. The third problem is related to Exchange-Traded Funds (ETFs). ETFs are funds traded on the open market which typically have their performance tied to a benchmark index. They are composed of a basket of assets; most attempt to reproduce the returns of an index, but a growing number try to achieve a multiple of the benchmark return, such as two times the return or its negative. We present a detailed performance study of the current ETF market and find, among other conclusions, consistent underperformance among ETFs that aim to do more than simply track an index. We present a MINLP for the problem of selecting the basket of assets that composes an ETF, which, to the best of our knowledge, is the first in the literature. For all three models we present extensive computational results for portfolios derived from universes defined by S&P international equity indices with up to 1200 stocks.
We use CPLEX to solve the ARP problem and the software package Minotaur for both of our MINLPs, for the MNP and the ETF.
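The core of the MNP objective, the absolute correlation between portfolio and benchmark returns, is straightforward to compute. A small sketch of the quantity being minimised (plain Python with hypothetical return data; the actual model is a MINLP with trading constraints on top of this):

```python
def correlation(xs, ys):
    """Pearson correlation of two equal-length return series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / (vx ** 0.5 * vy ** 0.5)

def portfolio_benchmark_corr(weights, asset_returns, bench_returns):
    """The quantity whose absolute value the MNP model minimises:
    correlation between the portfolio's per-period return and the
    benchmark's return over the same periods."""
    port = [sum(w * r for w, r in zip(weights, period))
            for period in asset_returns]
    return correlation(port, bench_returns)
```

A portfolio fully invested in an asset that tracks the benchmark gives correlation 1; the MNP model searches for integer-feasible weights driving the absolute value of this quantity towards zero.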
|
26 |
Optimisation of definition structures & parameter values in process algebra models using evolutionary computation - Oaken, David R., January 2014 (has links)
Process Algebras are a formal modelling methodology and an effective tool for defining models of complex systems, particularly those involving multiple interacting processes. However, describing such a model using Process Algebras requires expertise from both the modeller and the domain expert, and finding the correct model to describe a system can be difficult. Furthermore, even with the correct model, tuning the parameters so that model outputs match experimental data can be both difficult and time consuming. Evolutionary Algorithms provide effective methods for finding solutions to optimisation problems with large and noisy search spaces, and have proven well suited to parameter fitting problems aimed at matching known data or desired behaviour. It is proposed that Process Algebras and Evolutionary Algorithms have complementary strengths for developing models of complex systems. Evolutionary Algorithms require a precise and accurate fitness function to score and rank solutions, and Process Algebras can be incorporated into the fitness function to provide this mathematical score. Presented in this work is the Evolving Process Algebra (EPA) framework, designed for the application of Evolutionary Algorithms (specifically Genetic Algorithms and Genetic Programming optimisation techniques) to models described in a Process Algebra (specifically PEPA and Bio-PEPA), with the aim of evolving fitter models. The EPA framework is demonstrated using multiple complex systems. For PEPA this includes the dining philosophers resource allocation problem, the repressilator genetic circuit, the G-protein cellular signal regulators and two epidemiological problems: HIV and the measles virus. For Bio-PEPA the problems include a biochemical reactant-product system, a generic genetic network, a variant of the G-protein system and three epidemiological problems derived from the measles virus.
Also presented is the EPA Utility Assistant program, a lightweight graphical user interface designed to open the full functionality and parallelisation of the EPA framework to beginner or novice users. In addition, the assistant program aids in collating and graphing results after experiments are completed.
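The structural idea of EPA, a model evaluation inside the fitness function scored against data, can be illustrated with a toy genetic algorithm fitting one parameter of a trivial linear model. In the framework itself the model evaluation would be a PEPA or Bio-PEPA simulation; everything below is a hypothetical stand-in:

```python
import random

def fitness(param, data):
    """Sum-of-squares distance between model output and observed data.
    In EPA the 'model' would be a (Bio-)PEPA simulation; here it is the
    toy model output(t) = param * t."""
    return sum((param * t - d) ** 2 for t, d in enumerate(data))

def evolve(data, pop_size=20, generations=50, seed=0):
    """Tiny elitist genetic algorithm over one real-valued parameter."""
    rng = random.Random(seed)
    pop = [rng.uniform(0.0, 10.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(p, data))
        parents = pop[:pop_size // 2]                          # selection
        children = [p + rng.gauss(0.0, 0.1) for p in parents]  # mutation
        pop = parents + children                 # elitism keeps the best
    return min(pop, key=lambda p: fitness(p, data))
```

For data generated with parameter 2 the search settles near 2; EPA applies the same loop, with Genetic Programming additionally mutating the model's definition structure, not just its parameters.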
|
27 |
Active-set prediction for interior point methods - Yan, Yiming, January 2015 (has links)
This research studies how to efficiently predict the optimal active constraints of an inequality constrained optimization problem, in the context of Interior Point Methods (IPMs). We propose a framework based on shifting/perturbing the inequality constraints of the problem. Despite being a class of powerful tools for solving Linear Programming (LP) problems, IPMs are well known to encounter difficulties with active-set prediction due essentially to their construction. When applied to an inequality constrained optimization problem, IPMs generate iterates that belong to the interior of the set determined by the constraints, thus avoiding/ignoring the combinatorial aspect of the solution. This comes at the cost of difficulty in predicting the optimal active constraints that would enable termination, as well as increasing ill-conditioning of the solution process. Existing techniques for active-set prediction, however, suffer from difficulties in making an accurate prediction at the early stage of the iterative process of IPMs; by the time these techniques are ready to yield an accurate prediction towards the end of a run, as the iterates approach the solution set, the IPMs have to solve increasingly ill-conditioned, and hence difficult, subproblems. To address this challenge, we propose the use of controlled perturbations. Namely, in the context of LP problems, we consider perturbing the inequality constraints (by a small amount) so as to enlarge the feasible set. We show that if the perturbations are chosen judiciously, the solution of the original problem lies on or close to the central path of the perturbed problem. We solve the resulting perturbed problem(s) using a path-following IPM while predicting on the way the active set of the original LP problem; we find that our approach is able to accurately predict the optimal active set of the original problem before the duality gap for the perturbed problem gets too small.
Furthermore, depending on problem conditioning, this prediction can happen sooner than predicting the active set for the perturbed problem, or for the original one if no perturbations are used. Proof-of-concept algorithms are presented, and encouraging preliminary numerical experience is reported comparing activity prediction for the perturbed and unperturbed problem formulations. We also extend the idea of using controlled perturbations to enhance the capabilities of optimal active-set prediction for IPMs for convex Quadratic Programming (QP) problems. QP problems share many properties of LP, but some of the corresponding results require more care; encouraging preliminary numerical experience is also presented for the QP case.
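For an LP with bounds x >= 0, predicting the optimal active set from an interior iterate typically means deciding, for each i, whether x_i* = 0, for instance by comparing the primal value x_i with its dual slack s_i. The cut-off indicator below is one simple textbook-style choice, shown only to fix ideas; it is not the thesis's perturbation-based predictor:

```python
def predict_active_set(x, s, threshold=1.0):
    """Guess which bounds x_i >= 0 are active at the optimum from an
    interior iterate (x, s) with x_i * s_i roughly equal to the barrier
    parameter mu: a small primal value relative to its dual slack
    suggests x_i* = 0.  One simple cut-off indicator among many."""
    return [i for i, (xi, si) in enumerate(zip(x, s)) if xi / si < threshold]
```

Early in a run, x and s are both far from their limits and such indicators misclassify freely; they only become reliable as mu shrinks, which is exactly when the IPM's subproblems become ill-conditioned, the tension the controlled-perturbation approach is designed to break.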
|
28 |
Parallel problem generation for structured problems in mathematical programming - Qiang, Feng, January 2015 (has links)
The aim of this research is to investigate parallel problem generation for structured optimization problems. This research has produced a novel parallel model generator tool, namely the Parallel Structured Model Generator (PSMG). PSMG adopts the model syntax from SML to attain backward compatibility with models already written in SML [1]. Unlike the proof-of-concept implementation for SML in [2], PSMG does not depend on AMPL [3]. In this thesis, we first explain what a structured problem is using concrete real-world problems modelled in SML. Presenting these example models allows us to exhibit PSMG’s modelling syntax and techniques in detail. PSMG provides an easy-to-use framework for modelling large-scale nested structured problems, including multi-stage stochastic problems, and can be used for modelling linear programming (LP), quadratic programming (QP), and nonlinear programming (NLP) problems. The second part of this thesis describes the logical calling sequences and dependencies among PSMG’s parallel operations and algorithms, and explains the design concept behind PSMG’s solver interface. The interface follows a solver-driven work assignment approach that allows the solver to decide how to distribute problem parts to processors in order to obtain better data locality and load balancing for solving problems in parallel. PSMG adopts a delayed constraint expansion design, which allows the memory allocation for computed entities to happen on a process only when necessary. The computed entities can be the set expansions of the indexing expressions associated with the variable, parameter and constraint declarations, or temporary values used for set and parameter constructions.
We also illustrate algorithms that are important for an efficient implementation of PSMG, such as routines for partitioning constraints according to blocks and automatic differentiation algorithms for evaluating Jacobian and Hessian matrices and their corresponding sparsity patterns. Furthermore, PSMG implements a generic solver interface which can be linked with different structure-exploiting optimization solvers, such as decomposition or interior point based solvers; the work required for linking with PSMG’s solver interface is also discussed. Finally, we evaluate PSMG’s run-time performance and memory usage by generating structured problems of various sizes. The results from both serial and parallel executions are discussed. The benchmark results show that PSMG achieves good parallel efficiency on up to 96 processes. PSMG distributes memory usage among parallel processes, which enables the generation of problems that are too large to be processed on a single node due to memory restrictions.
|
29 |
DC programming and DCA for some classes of problems in machine learning and data mining - Nguyen, Manh Cuong, 19 May 2014 (links)
Classification (supervised, unsupervised and semi-supervised) is an important research topic in data mining, with many applications in various fields. In this thesis, we focus on developing optimization approaches for solving some classes of optimization problems in data classification. Firstly, for unsupervised learning, we consider and develop algorithms for two well-known problems: modularity maximization for community detection in complex networks, and the data visualization problem with Self-Organizing Maps. Secondly, for semi-supervised learning, we investigate effective algorithms to solve the feature selection problem in semi-supervised Support Vector Machines. Finally, for supervised learning, we address the feature selection problem in multi-class Support Vector Machines. All of these problems are large-scale non-convex optimization problems. Our methods are based on DC (Difference of Convex functions) programming and DCA (DC Algorithms), which are well known as powerful tools in optimization. The considered problems are reformulated as DC programs, and DCA is then used to obtain the solution. Taking into account the structure of the considered problems, we provide appropriate DC decompositions and relevant strategies for choosing initial points for DCA in order to improve its efficiency. All of the proposed algorithms have been tested on real-world datasets from biology, social networks, and computer security.
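The generic DCA iteration behind these methods is simple to state: for f = g - h with g and h convex, pick a subgradient y_k of h at x_k, then minimise the convex function g(x) - y_k * x to obtain x_{k+1}. A minimal one-dimensional sketch (our own toy instance, not one of the thesis's applications):

```python
def dca(argmin_g_minus_linear, h_subgrad, x0, iters=50):
    """Generic DCA for f = g - h (g, h convex): linearise h at x_k via a
    subgradient y_k, then minimise the convex majoriser g(x) - y_k * x."""
    x = x0
    for _ in range(iters):
        y = h_subgrad(x)              # y_k in the subdifferential of h
        x = argmin_g_minus_linear(y)  # solve the convex subproblem
    return x

# Toy DC instance: f(x) = x**2 - |x|, so g(x) = x**2 and h(x) = |x|.
# argmin_x { x**2 - y*x } = y / 2, and sign(x) is a subgradient of |x|.
sign = lambda x: (x > 0) - (x < 0)
x_star = dca(lambda y: y / 2.0, sign, x0=2.0)
```

Starting from x0 = 2 the iteration reaches the local minimiser x = 0.5 of f in one step and stays there; the practical efficiency of DCA, as the thesis stresses, depends heavily on which DC decomposition g - h is chosen.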
|
30 |
DC programming and DCA for some classes of problems in Wireless Communication Systems - Tran, Thi Thuy, 24 April 2017 (links)
Wireless communication plays an increasingly important role in many aspects of life. Many applications of wireless communication serve people's daily lives, such as e-banking, e-commerce and medical services. Therefore, quality of service (QoS), as well as the confidentiality and privacy of information over the wireless network, are of leading interest in wireless network design. In this dissertation, we focus on developing optimization techniques to address problems in two topics: QoS and physical layer security. Our methods rely on DC (Difference of Convex functions) programming and DCA (DC Algorithms), powerful non-differentiable, non-convex optimization tools that have enjoyed great success over the last two decades in modelling and solving many application problems in various fields of applied science. Besides the introduction and conclusion chapters, the main content of the dissertation is divided into four chapters: chapter 2 concerns QoS in wireless networks, whereas the next three chapters tackle physical layer security. Chapter 2 discusses a criterion of QoS assessed by the minimum of the signal-to-noise ratios (SNRs) at the receivers. The objective is to maximize the minimum SNR in order to ensure fairness among users and to avoid the case in which some users suffer from a very low SNR. We apply DC programming and DCA to solve the derived max-min fairness optimization problem. Aware that the efficiency of DCA heavily depends on the corresponding DC decomposition, we recast the considered problem as a general DC program (minimization of a DC function on a set defined by some convex constraints and some DC constraints) using a DC decomposition different from the existing one, and design a general DCA scheme to handle that problem. The numerical results reveal the efficiency of our proposed DCA compared with the existing DCA and other methods. In addition, we rigorously prove the convergence of the proposed general DCA scheme. The common objective of the next three chapters (Chapters 3, 4 and 5) is to guarantee security at the physical layer of wireless communication systems by maximizing their secrecy rate. Three different architectures of the wireless system, using various cooperative techniques, are considered in these chapters. More specifically, a point-to-point wireless system including a single eavesdropper and employing a cooperative jamming technique is considered in chapter 3. Chapter 4 concerns a relay wireless system including a single eavesdropper and using a combination of a beamforming technique and a cooperative relaying technique with two relaying protocols, Amplify-and-Forward (AF) and Decode-and-Forward (DF). Chapter 5 concerns a more general relay wireless system than chapter 4, in which multiple eavesdroppers are considered instead of a single eavesdropper. The differences in the architecture of the wireless systems, as well as in the utilized cooperative techniques, result in three mathematically different optimization problems. A unified approach based on DC programming and DCA is proposed to deal with these problems. The special structures of the derived optimization problems in chapters 3 and 4 are exploited to design efficient standard DCA schemes, in the sense that the convex subproblems in these schemes are solved either explicitly or inexpensively. The max-min forms of the optimization problems in chapter 5 are reformulated as general DC programs with DC constraints, and general DCA schemes are developed to address these problems. The results obtained by DCA show the efficiency of our approach in comparison with existing methods. The convergence of the proposed general DCA schemes is thoroughly established.
|