About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31. Improvements on the bees algorithm for continuous optimisation problems

Kamsani, Silah Hayati January 2016 (has links)
This work focuses on improvements to the Bees Algorithm, in particular to enhance its convergence rate. For the first enhancement, a pseudo-gradient Bees Algorithm (PG-BA) compares the fitness as well as the positions of previous and current bees, so that the best bees in each patch are guided towards a better search direction after each cycle. Unlike a typical gradient search method, this approach eliminates the need to differentiate the objective function. The improved algorithm is subjected to several numerical benchmark test functions as well as the training of a neural network. The results from the experiments are then compared to the standard variant of the Bees Algorithm and other swarm intelligence procedures. The data analysis generally confirms that the PG-BA is effective at speeding up convergence to the optimum. Next, an approach to avoid the formation of overlapping patches is proposed. The Patch Overlap Avoidance Bees Algorithm (POA-BA) is designed to avoid redundancy in the search area, especially if a site is deemed unprofitable. This method is similar to Tabu Search (TS), in that the POA-BA forbids the exact exploitation of previously visited solutions along with their corresponding neighbourhoods. Patches are not allowed to intersect, not just in the next generation but also within the current cycle. This reduces the number of patches that materialise on the same peak (maximisation) or in the same valley (minimisation), ensuring a thorough search of the problem landscape as bees are distributed around the scaled-down area. The same benchmark problems as for the PG-BA were applied to this modified strategy with reasonable success. Finally, the Bees Algorithm is revised to locate all of the global optima as well as the substantial local peaks in a single run. These multiple solutions of comparable fitness offer alternatives for decision makers to choose from.
Patches are formed only if the bees are the fittest from different peaks, determined using a hill-valley mechanism, in this so-called Extended Bees Algorithm (EBA). This permits the maintenance of diverse solutions throughout the search process while minimising the chances of getting trapped. This version proves beneficial when tested on numerous multimodal optimisation problems.
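The patch-based search that these enhancements build on can be made concrete with a minimal sketch of the standard Bees Algorithm for continuous minimisation. All function names, parameter values and the benchmark below are our own illustrative choices, not the thesis's PG-BA or POA-BA variants:

```python
import random

def bees_algorithm(f, bounds, n_scouts=20, n_elite=3, n_recruits=5,
                   patch_size=0.5, shrink=0.9, cycles=100, seed=0):
    """Minimal Bees Algorithm for continuous minimisation: elite sites
    get a local neighbourhood search, the remaining scouts explore at
    random, and the patch (neighbourhood) size shrinks each cycle."""
    rng = random.Random(seed)

    def rand_point():
        return [rng.uniform(lo, hi) for lo, hi in bounds]

    def neighbour(x, size):
        return [min(max(xi + rng.uniform(-size, size), lo), hi)
                for xi, (lo, hi) in zip(x, bounds)]

    scouts = [rand_point() for _ in range(n_scouts)]
    size = patch_size
    for _ in range(cycles):
        scouts.sort(key=f)
        elites = []
        for x in scouts[:n_elite]:  # recruit bees around each elite site
            candidates = [neighbour(x, size) for _ in range(n_recruits)] + [x]
            elites.append(min(candidates, key=f))
        # abandoned sites are replaced by fresh random scouts
        scouts = elites + [rand_point() for _ in range(n_scouts - n_elite)]
        size *= shrink  # neighbourhood shrinking
    return min(scouts, key=f)

sphere = lambda x: sum(xi * xi for xi in x)
best = bees_algorithm(sphere, [(-5.0, 5.0)] * 2)
```

On the 2-D sphere function the shrinking neighbourhood drives the elite patches towards the optimum within a few dozen cycles.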
32. Evolutionary approaches for portfolio optimization

Lwin, Khin Thein January 2015 (has links)
Portfolio optimization involves the optimal assignment of limited capital to different available financial assets to achieve a reasonable trade-off between profit and risk objectives. Markowitz's mean-variance (MV) model is widely regarded as the foundation of modern portfolio theory and provides a quantitative framework for portfolio optimization problems. In real markets, investors commonly face trading restrictions, which require the constructed portfolios to meet trading constraints. When additional constraints are added to the basic MV model, the problem becomes more complex, and exact optimization approaches run into difficulties delivering solutions within reasonable time for large problem sizes. Introducing the cardinality constraint alone already transforms the classic quadratic optimization model into a mixed-integer quadratic programming problem, which is NP-hard. Evolutionary algorithms, a class of metaheuristics, are a known alternative for optimization problems that are too complex to be solved using deterministic techniques. This thesis focuses on single-period portfolio optimization problems with practical trading constraints and two different risk measures. Four hybrid evolutionary algorithms are presented to efficiently solve these problems with gradually more complex real-world constraints. In the first part of the thesis, the mean-variance portfolio model is investigated with real-world constraints taken into account. A hybrid evolutionary algorithm (PBILDE) for portfolio optimization with cardinality and quantity constraints is presented. The proposed PBILDE achieves a strong synergetic effect through the hybridization of PBIL and DE. A partially guided mutation and an elitist update strategy are proposed to promote the efficient convergence of PBILDE. Its effectiveness is evaluated and compared with other existing algorithms over a number of datasets.
A multi-objective scatter search with archive (MOSSwA) algorithm for portfolio optimization with cardinality, quantity and pre-assignment constraints is then presented. New subset generation and solution combination methods are proposed to generate efficient and diverse portfolios. A learning-guided multi-objective evolutionary (MODEwAwL) algorithm for portfolio optimization with cardinality, quantity, pre-assignment and round-lot constraints is presented. A learning mechanism is introduced to extract important features from the set of elite solutions, and problem-specific selection heuristics are introduced to identify high-quality solutions at a reduced computational cost. An efficient and effective candidate generation scheme, utilizing the learning mechanism, problem-specific heuristics and effective direction-based search methods, is proposed to guide the search towards promising regions of the search space. In the second part of the thesis, an alternative risk measure, Value-at-Risk (VaR), is considered. A non-parametric mean-VaR model with six practical trading constraints is investigated, and a multi-objective evolutionary algorithm with guided learning (MODE-GL) is presented for it. Two different variants of DE mutation schemes in the solution generation scheme are proposed to promote exploration towards the least crowded region of the solution space. Experimental results using historical daily financial market data from the S&P 100 and S&P 500 indices are presented. When cardinality constraints are considered, incorporating a learning mechanism significantly promotes the efficient convergence of the search.
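The building blocks of such constrained mean-variance models are easy to state in code. The sketch below shows the MV objective and the cardinality/quantity (floor-ceiling) feasibility check that the hybrid algorithms above must evaluate; the bounds, the risk-aversion weight `lam`, and the toy data are invented for illustration:

```python
def mv_objective(weights, mu, cov, lam=0.5):
    """Markowitz mean-variance objective lam*risk - (1-lam)*return;
    lower is better."""
    n = len(weights)
    ret = sum(w * m for w, m in zip(weights, mu))
    risk = sum(weights[i] * cov[i][j] * weights[j]
               for i in range(n) for j in range(n))
    return lam * risk - (1 - lam) * ret

def satisfies_constraints(weights, max_cards, lo=0.05, hi=0.6, tol=1e-9):
    """Cardinality constraint: at most max_cards assets held.
    Quantity (floor/ceiling) constraint: each held weight in [lo, hi].
    Budget constraint: weights sum to one."""
    held = [w for w in weights if w > tol]
    return (len(held) <= max_cards
            and abs(sum(weights) - 1.0) < tol * 10
            and all(lo - tol <= w <= hi + tol for w in held))

# toy data: expected returns and covariance for three assets
mu = [0.10, 0.12, 0.07]
cov = [[0.09, 0.01, 0.00],
       [0.01, 0.16, 0.02],
       [0.00, 0.02, 0.04]]
w = [0.5, 0.0, 0.5]   # holds two assets: feasible when max_cards=2
```

A metaheuristic would search over such weight vectors, rejecting or repairing any candidate for which `satisfies_constraints` fails.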
33. Randomized coordinate descent methods for big data optimization

Takac, Martin January 2014 (has links)
This thesis consists of five chapters. We develop new serial (Chapter 2), parallel (Chapter 3), distributed (Chapter 4) and primal-dual (Chapter 5) stochastic (randomized) coordinate descent methods, analyze their complexity and conduct numerical experiments on synthetic and real data of huge sizes (GBs/TBs of data, millions/billions of variables). In Chapter 2 we develop a randomized coordinate descent method for minimizing the sum of a smooth and a simple nonsmooth separable convex function, and prove that it obtains an ε-accurate solution with probability at least 1 - p in at most O((n/ε) log(1/p)) iterations, where n is the number of blocks. This extends recent results of Nesterov [43], which cover the smooth case, to composite minimization, while at the same time improving the complexity by a factor of 4 and removing ε from the logarithmic term. More importantly, in contrast with the aforementioned work, in which the author achieves the results by applying the method to a regularized version of the objective function with an unknown scaling factor, we show that this is not necessary, thus achieving the first true iteration complexity bounds. For strongly convex functions the method converges linearly. In the smooth case we also allow for arbitrary probability vectors and non-Euclidean norms, and our analysis is much simpler. In Chapter 3 we show that the randomized coordinate descent method developed in Chapter 2 can be accelerated by parallelization. The speedup, as compared to the serial method and measured by the number of iterations needed to approximately solve the problem with high probability, is equal to the product of the number of processors and a natural, easily computable measure of separability of the smooth component of the objective function. In the worst case, when no degree of separability is present, there is no speedup; in the best case, when the problem is separable, the speedup is equal to the number of processors.
Our analysis also works in the mode where the number of coordinates updated at each iteration is random, which allows for modeling situations with a variable (busy or unreliable) number of processors. We demonstrate numerically that the algorithm is able to solve huge-scale l1-regularized least squares problems with a billion variables. In Chapter 4 we extend coordinate descent to a distributed environment. We initially partition the coordinates (features or examples, depending on the problem formulation) and assign each partition to a different node of a cluster. At every iteration, each node picks a random subset of the coordinates from those it owns, independently of the other computers, and in parallel computes and applies updates to the selected coordinates based on a simple closed-form formula. We give bounds on the number of iterations sufficient to approximately solve the problem with high probability, and show how this depends on the data and on the partitioning. We perform numerical experiments with a LASSO instance described by a 3TB matrix. Finally, in Chapter 5, we address the use of mini-batches in the stochastic optimization of Support Vector Machines (SVMs). We show that the same quantity, the spectral norm of the data, controls the parallelization speedup obtained for both primal stochastic subgradient descent (SGD) and stochastic dual coordinate ascent (SDCA) methods, and we use it to derive novel variants of mini-batched (parallel) SDCA. Our guarantees for both methods are expressed in terms of the original nonsmooth primal problem based on the hinge loss. Our results in Chapters 2 and 3 are cast for blocks (groups of coordinates) instead of individual coordinates, and hence the methods are better described as block coordinate descent methods. While the results in Chapters 4 and 5 are not formulated for blocks, they can be extended to this setting.
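For smooth least squares, the serial method of Chapter 2 specialises to a particularly simple scheme: pick a coordinate j at random and take an exact minimising step of length 1/L_j along it, where L_j is the coordinate-wise Lipschitz constant. The following is an illustrative sketch under those assumptions, not the thesis's block or composite version:

```python
import random

def rcd_least_squares(A, b, iterations=500, seed=0):
    """Serial randomized coordinate descent on f(x) = 0.5*||Ax - b||^2.
    Each step samples a coordinate j uniformly and takes the exact
    minimising step -grad_j / L_j, with L_j = ||A_:j||^2."""
    m, n = len(A), len(A[0])
    rng = random.Random(seed)
    x = [0.0] * n
    residual = [-bi for bi in b]   # A x - b for the starting point x = 0
    lip = [sum(A[i][j] ** 2 for i in range(m)) for j in range(n)]
    for _ in range(iterations):
        j = rng.randrange(n)
        grad_j = sum(A[i][j] * residual[i] for i in range(m))
        step = -grad_j / lip[j]
        x[j] += step
        for i in range(m):          # keep the residual in sync cheaply
            residual[i] += A[i][j] * step
    return x

A = [[2.0, 1.0], [1.0, 3.0]]
b = [3.0, 4.0]
x = rcd_least_squares(A, b)   # the consistent system solves to x = (1, 1)
```

Maintaining the residual makes each iteration cost O(m) rather than a full matrix product, which is the reason coordinate methods scale to very large sparse problems.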
34. Scan statistics and systems' reliability

Πήττα, Θεοδώρα 22 December 2009 (has links)
The aim of this dissertation is to connect the scan statistic S_(n,m), which represents the maximum number of successes contained in a moving window of length m over n consecutive Bernoulli trials, with the reliability of a consecutive k-within-m-out-of-n failure system (k-within-m-out-of-n:F system). First, we evaluate the probability mass function and the cumulative distribution function of the random variable S_(n,m). We obtain these by relating S_(n,m) to the random variable T_k^((m)), which denotes the waiting time until, for the first time, k successes are contained in a moving window of length m (a scan of type k/m) over a sequence of Bernoulli trials, with 1 marking a success and 0 a failure. The probability mass function and the cumulative distribution function of T_k^((m)) are evaluated using two methods: (i) the Markov chain embedding method and (ii) recursive schemes. Through T_k^((m)) we then evaluate the probability mass function and the cumulative distribution function of S_(n,m) [Glaz and Balakrishnan (1999), Balakrishnan and Koutras (2002)]. Next, we evaluate the reliability, R, of the consecutive k-within-m-out-of-n failure system (Griffith, 1986). Such a system fails if and only if there exist m consecutive components that include among them at least k failed ones (1≤k≤m≤n). Exact formulae for the reliability are presented for k=2 as well as for m=n, n-1, n-2, n-3 (Sfakianakis, Kounias and Hillaris, 1992), and a recursive algorithm for the reliability evaluation is also given (Malinowski and Preuss, 1994). Using a dual relation between the cumulative distribution function of T_k^((m)), and therefore of S_(n,m), and the reliability R, we connect the reliability of this system with the scan statistic S_(n,m). Finally, we briefly present some other applications of scan statistics in molecular biology [Karlin and Ghandour (1985), Glaz and Naus (1991)], quality control [Roberts, 1958], and elsewhere.
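For tiny n, the quantities involved can be checked by brute force. The sketch below computes the scan statistic S_(n,m) directly and uses the duality described above: the k-within-m-out-of-n:F system works exactly when the scan statistic of the failure indicators stays below k. Enumeration is illustrative only; the thesis's recursive and Markov-chain-embedding methods scale far better:

```python
from itertools import product

def scan_statistic(seq, m):
    """S_(n,m): the maximum number of 1s in any moving window of length m."""
    return max(sum(seq[i:i + m]) for i in range(len(seq) - m + 1))

def reliability(n, m, k, p):
    """Reliability of a consecutive k-within-m-out-of-n:F system with
    i.i.d. components, each working with probability p, by exhaustive
    enumeration (tiny n only). The system works iff no window of m
    components contains k or more failures, i.e. iff the scan statistic
    of the failure indicators is below k."""
    rel = 0.0
    for states in product((0, 1), repeat=n):   # 1 marks a failed component
        if scan_statistic(states, m) < k:
            pr = 1.0
            for s in states:
                pr *= (1 - p) if s else p
            rel += pr
    return rel
```

Two sanity checks follow from the definition: for k=1 any single failure brings the system down, so R = p^n; and for k=m=n the system fails only when every component fails, so R = 1 - (1-p)^n.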
35. Analysis of the fitness landscape for the class of combinatorial optimisation problems

Tayarani-Najar, M.-H. January 2013 (has links)
The anatomy of the fitness landscape for a group of well-known combinatorial optimisation problems is studied in this research, and the similarities and differences between their landscapes are pointed out. We target the analysis of the fitness landscape for the MAX-SAT, Graph-Colouring, Travelling Salesman and Quadratic Assignment problems. Belonging to the class of NP-hard problems, all of these become exponentially harder as the problem size grows. We study a group of properties of the fitness landscape for these problems and show which properties are shared by different problems and which differ. The properties investigated include the time it takes for a local search algorithm to find a local optimum, the number of local and global optima, the distance between local and global optima, the expected cost of the optima found, the probability of reaching a global optimum, and the cost of the best configuration in the search space. The relationship between these properties and the system size and other parameters of the problems is studied, and it is shown how these properties are shared or differ across problems. We also study the long-range correlation within the search space, including the expected cost in the Hamming sphere around the local and global optima, the basins of attraction of the local and global optima, and the probability of finding a local optimum as a function of its cost. We believe this information provides good insight for algorithm designers.
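Several of the listed properties (the set of local optima, their costs, and the steps needed to reach them) can be estimated empirically with restart local search. Below is a minimal sketch on an invented deceptive "trap" landscape, not one of the four problem classes studied in the thesis:

```python
import random

def local_search_stats(fitness, n_bits, restarts=200, seed=0):
    """Estimate simple landscape statistics by best-improvement hill
    climbing from random starts: which local optima are found, their
    fitness, and the average number of steps to reach them."""
    rng = random.Random(seed)
    optima = {}
    total_steps = 0
    for _ in range(restarts):
        x = [rng.randint(0, 1) for _ in range(n_bits)]
        while True:
            best_flip, best_val = None, fitness(x)
            for i in range(n_bits):          # examine all Hamming-1 moves
                x[i] ^= 1
                value = fitness(x)
                x[i] ^= 1
                if value > best_val:
                    best_flip, best_val = i, value
            if best_flip is None:            # no improving move: local optimum
                break
            x[best_flip] ^= 1
            total_steps += 1
        optima[tuple(x)] = fitness(x)
    return optima, total_steps / restarts

def trap(x):
    """Deceptive landscape: all-ones is the global optimum, but every
    other string is rewarded for having fewer ones."""
    ones = sum(x)
    return len(x) if ones == len(x) else len(x) - 1 - ones

optima, avg_steps = local_search_stats(trap, 8)
```

On this landscape hill climbing is drawn towards the all-zeros local optimum from almost every start, which is exactly the kind of basin-of-attraction behaviour such statistics expose.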
36. Stochastic programming for hydro-thermal unit commitment

Schulze, Tim January 2015 (has links)
In recent years the deregulation of energy markets and the expansion of volatile renewable energy supplies have triggered an increased interest in stochastic optimization models for thermal and hydro-thermal scheduling. Several studies have modelled this as stochastic linear or mixed-integer optimization problems. Although a variety of efficient solution techniques have been developed for these models, little is published about the added value of stochastic models over deterministic ones. In the context of day-ahead and intraday unit commitment under wind uncertainty, we compare two-stage and multi-stage stochastic models to deterministic ones and quantify their added value. We show that stochastic optimization models achieve minimal operational cost without having to tune reserve margins in advance, and that their superiority over deterministic models grows with the amount of uncertainty in the relevant wind forecasts. We present a modification of the WILMAR scenario generation technique designed to match the properties of the errors in our wind forecasts, and show that this is needed to make the stochastic approach worthwhile. Our evaluation is done in a rolling-horizon fashion over the course of two years, using a 2020 central scheduling model of the British National Grid with transmission constraints and a detailed model of pump storage operation and system-wide reserve and response provision. Solving stochastic problems directly is computationally intractable for large instances, and alternative approaches are required. In this study we use a Dantzig-Wolfe reformulation to decompose the problem by scenarios. We derive and implement a column generation method with dual stabilisation and novel primal and dual initialisation techniques.
A fast, novel schedule combination heuristic is used to construct an optimal primal solution, and numerical results show that knowing this solution from the start also improves the convergence of the lower bound in the column generation method significantly. We test this method on instances of our British model and illustrate that convergence to within 0.1% of optimality can be achieved quickly.
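The added value of a stochastic model over a deterministic one can be illustrated with a toy single-generator commitment problem: the stochastic plan minimises expected cost over wind scenarios, while the deterministic plan optimises against the mean wind and is then evaluated under uncertainty. All numbers below are invented for illustration:

```python
def dispatch_cost(capacity, demand, wind, c_gen=30.0, c_shed=300.0):
    """Second-stage (recourse) cost: serve net demand up to the
    committed capacity, paying a steep penalty for unserved load."""
    net = max(demand - wind, 0.0)
    generated = min(net, capacity)
    shortfall = net - generated
    return c_gen * generated + c_shed * shortfall

def expected_cost(capacity, demand, scenarios, c_commit=5.0):
    """First-stage commitment cost plus expected recourse cost."""
    return c_commit * capacity + sum(
        prob * dispatch_cost(capacity, demand, wind)
        for wind, prob in scenarios)

scenarios = [(0.0, 0.2), (40.0, 0.5), (80.0, 0.3)]   # (wind MW, probability)
demand = 100.0
caps = range(0, 121, 10)

# stochastic plan: commit against the full scenario set
stochastic_cap = min(caps, key=lambda c: expected_cost(c, demand, scenarios))
# deterministic plan: commit against the mean wind only
mean_wind = sum(w * p for w, p in scenarios)
deterministic_cap = min(
    caps, key=lambda c: expected_cost(c, demand, [(mean_wind, 1.0)]))
# value of the stochastic solution: evaluate both plans under uncertainty
vss = (expected_cost(deterministic_cap, demand, scenarios)
       - expected_cost(stochastic_cap, demand, scenarios))
```

The deterministic plan under-commits (it trusts the mean wind) and pays heavy shortfall penalties in the low-wind scenario, so the value of the stochastic solution is strictly positive, mirroring the added-value comparison made in the thesis.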
37. Nonlinear dynamics of pattern recognition and optimization

Marsden, Christopher J. January 2012 (has links)
We associate learning in living systems with the shaping of the velocity vector field of a dynamical system in response to external, generally random, stimuli. We consider various approaches to implement a system that is able to adapt the whole vector field, rather than just parts of it - a drawback of the most common current learning systems: artificial neural networks. This leads us to propose the mathematical concept of self-shaping dynamical systems. To begin, there is an empty phase space with no attractors, and thus a zero velocity vector field. Upon receiving the random stimulus, the vector field deforms and eventually becomes smooth and deterministic, despite the random nature of the applied force, while the phase space develops various geometrical objects. We consider the simplest of these - gradient self-shaping systems, whose vector field is the gradient of some energy function, which under certain conditions develops into the multi-dimensional probability density distribution of the input. We explain how self-shaping systems are relevant to artificial neural networks. Firstly, we show that they can potentially perform pattern recognition tasks typically implemented by Hopfield neural networks, but without any supervision and on-line, and without developing spurious minima in the phase space. Secondly, they can reconstruct the probability density distribution of input signals, like probabilistic neural networks, but without the need for new training patterns to have to enter the network as new hardware units. We therefore regard self-shaping systems as a generalisation of the neural network concept, achieved by abandoning the "rigid units - flexible couplings'' paradigm and making the vector field fully flexible and amenable to external force. It is not clear how such systems could be implemented in hardware, and so this new concept presents an engineering challenge. 
It could also become an alternative paradigm for the modelling of both living and learning systems. Mathematically, it is interesting to ask how a self-shaping system could develop non-trivial objects in the phase space, such as periodic orbits or chaotic attractors. We investigate how a delayed vector field could form such objects, and show that this method produces chaos in a class of systems which have very simple dynamics in the non-delayed case. We also demonstrate the coexistence of bounded and unbounded solutions depending on the initial conditions and the value of the delay. Finally, we speculate about how such a method could be used in global optimization.
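The gradient self-shaping idea can be caricatured in a few lines: stimuli deposit Gaussian wells in an energy landscape, and recall is gradient descent into the nearest well. This is only a loose illustration of the concept, with invented parameters, not the thesis's formulation:

```python
import math

def make_energy(patterns, width=0.5):
    """An energy landscape 'shaped' by stimuli: a Gaussian well is
    deposited at each observed pattern (a crude stand-in for the
    self-shaping process described above)."""
    def energy(x):
        return -sum(
            math.exp(-sum((xi - pi) ** 2 for xi, pi in zip(x, p))
                     / (2 * width ** 2))
            for p in patterns)
    return energy

def gradient_descent(energy, x0, lr=0.1, steps=300, h=1e-5):
    """Follow the numerically estimated gradient downhill; the recall
    dynamics settle into the nearest well."""
    x = list(x0)
    for _ in range(steps):
        grad = []
        for i in range(len(x)):
            up, down = x[:], x[:]
            up[i] += h
            down[i] -= h
            grad.append((energy(up) - energy(down)) / (2 * h))
        x = [xi - lr * g for xi, g in zip(x, grad)]
    return x

patterns = [(0.0, 0.0), (3.0, 3.0)]   # two 'stored' stimuli
energy = make_energy(patterns)
recalled = gradient_descent(energy, (0.4, -0.3))   # noisy cue near (0, 0)
```

Unlike a Hopfield network, nothing here is supervised: each stimulus reshapes the vector field directly, and a noisy cue relaxes to the stored pattern it is closest to.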
38. Solving cardinality constrained portfolio optimisation problem using genetic algorithms and ant colony optimisation

Li, Yibo January 2015 (has links)
In this thesis we consider solution approaches for the index tracking problem, in which we aim to reproduce the performance of a market index without purchasing all of the stocks that constitute the index. We solve the problem using three different solution approaches, Mixed Integer Programming (MIP), Genetic Algorithms (GAs) and the Ant Colony Optimization (ACO) algorithm, by limiting the number of stocks that can be held. Each index is also assigned different cardinalities to examine the change in the solution values. All of the solution approaches are tested on eight market indices. The smallest data set consists of only 31 stocks, whereas the largest includes over 2000 stocks. The computational results from the MIP are used as the benchmark to measure the performance of the other solution approaches. Computational results are presented for the different solution approaches and conclusions are given. Finally, we carry out a post-analysis and investigate the best tracking portfolios achieved by the three solution approaches. We summarise the findings of the investigation and, in turn, further improve some of the algorithms. As the formulations of these problems are mixed-integer linear programs, we use the solver CPLEX to solve them. All of the programming is coded in AMPL.
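A stripped-down version of the GA approach to cardinality-constrained index tracking might look as follows. Solutions here are equally weighted stock subsets, which is a simplification (real formulations also optimise the weights), and the toy data are constructed so that the correct subset is known:

```python
import random

def tracking_error(portfolio, stock_returns, index_returns):
    """Mean squared deviation between portfolio and index returns;
    `portfolio` maps stock index -> weight."""
    total = 0.0
    for t, idx_ret in enumerate(index_returns):
        port_ret = sum(w * stock_returns[i][t] for i, w in portfolio.items())
        total += (port_ret - idx_ret) ** 2
    return total / len(index_returns)

def ga_index_tracking(stock_returns, index_returns, cardinality,
                      pop_size=30, generations=60, seed=1):
    """Toy GA: keep the better half each generation, and mutate by
    swapping one stock of a surviving subset for an unused one."""
    rng = random.Random(seed)
    n = len(stock_returns)
    fitness = lambda s: tracking_error(s, stock_returns, index_returns)

    def random_solution():
        return {i: 1.0 / cardinality
                for i in rng.sample(range(n), cardinality)}

    def mutate(sol):
        child = dict(sol)
        child.pop(rng.choice(list(child)))
        child[rng.choice([i for i in range(n) if i not in child])] = \
            1.0 / cardinality
        return child

    population = [random_solution() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness)
        survivors = population[:pop_size // 2]
        population = survivors + [mutate(rng.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return min(population, key=fitness)

# four stocks; the index is, by construction, 50/50 in stocks 0 and 1
stock_returns = [[0.010, -0.020, 0.030, 0.005],
                 [0.020, 0.010, -0.010, 0.015],
                 [-0.030, 0.040, 0.000, -0.010],
                 [0.000, 0.005, 0.020, -0.020]]
index_returns = [(a + b) / 2
                 for a, b in zip(stock_returns[0], stock_returns[1])]
best = ga_index_tracking(stock_returns, index_returns, cardinality=2)
```

With the cardinality set to two, the GA should recover the subset {0, 1}, driving the tracking error to zero on this constructed instance.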
39. Hyper-heuristics and fairness in examination timetabling problems

Muklason, Ahmad January 2017 (has links)
Examination timetabling is a challenging optimisation problem in operations research and artificial intelligence. The main aim is to spread exams evenly throughout the overall time period to facilitate student comfort and success; however, existing examination timetabling solvers neglect fairness by optimising the sum or average of the objective function value without considering its distribution among students or other stakeholders. The balance between the quality of the overall timetable and fairness (both global fairness and fairness within a cohort) is a major concern, so the latter is added as a new objective function and quality indicator of examination timetables. The objective function is also considered from the perspectives of multiple stakeholders of examination timetabling (i.e. students, invigilators, markers and estates), as opposed to viewing it as a single aggregate function. These notions turn the problem into a multi-objective optimisation problem. We first study a sum-of-powers objective, rather than a linear summation, to enforce fairness while concurrently minimising the objective function, using perturbation-based hyper-heuristic approaches to optimise the standard objective function. Secondly, a multi-stage approach is studied (generating an initial feasible solution, improving the standard quality of the solution, and then improving fairness). Given that the standard objective function and the fairness objective function conflict, we then study several multi-objective algorithms employed within a hyper-heuristic framework. The proposed hyper-heuristic algorithms can be divided into two approaches: classical scalarisation techniques (weighted sum and Tchebycheff), and population-based methods, namely the non-dominated sorting memetic algorithm II (NSMA-II) and a hybrid of artificial bee colony and Strength Pareto Evolutionary Algorithm 2 (ABC-SPEA2).
The experiments were conducted over two multi-objective examination timetabling problem formulations (i.e. with fairness and with multiple stakeholder perspectives), tested on problem instances from four different datasets: Carter, Nottingham, Yeditepe and ITC 2007. The experimental results on the multi-objective examination timetabling problem with fairness showed that, in terms of the standard objective function, the proposed approach can produce results comparable with the best known solutions reported in the literature, while at the same time the search can be forced to be fairer, with or without worsening the standard objective function in compensation. Fairness within a cohort can be improved much more readily than global fairness, and treating the problem as multi-objective helps the search for a near-optimal standard objective function escape local optima. The scalarisation-based hyper-heuristics outperform the population-based hyper-heuristics. The advantage of treating the examination timetabling problem as a multi-objective problem is that approximations of the Pareto optimal solutions give the optimal trade-off between the standard objective function, fairness among all students, and fairness within a cohort. In addition, the decision maker can view a solution from multiple stakeholders' perspectives. We believe that, given this more detailed information, examination timetabling decision makers can make better decisions.
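The two scalarisation techniques named above, and the Pareto-dominance filter underlying the population-based methods, can be sketched directly. The candidate scores, weights and ideal point below are invented for illustration:

```python
def weighted_sum(objectives, weights):
    """Classical weighted-sum scalarisation (lower is better)."""
    return sum(w * f for w, f in zip(weights, objectives))

def tchebycheff(objectives, weights, ideal):
    """Weighted Tchebycheff scalarisation: weighted max-norm distance
    from the ideal point; it can reach non-convex parts of the front."""
    return max(w * abs(f - z) for w, f, z in zip(weights, objectives, ideal))

def pareto_front(points):
    """Keep the non-dominated points (minimisation in every objective)."""
    def dominated(p):
        return any(q != p and all(qi <= pi for qi, pi in zip(q, p))
                   for q in points)
    return [p for p in points if not dominated(p)]

# candidate timetables scored as (standard penalty, unfairness)
candidates = [(10.0, 8.0), (12.0, 3.0), (20.0, 1.0)]
weights = (0.5, 0.5)
ideal = (10.0, 1.0)

best_ws = min(candidates, key=lambda f: weighted_sum(f, weights))
best_tc = min(candidates, key=lambda f: tchebycheff(f, weights, ideal))
```

A hyper-heuristic would use such a scalarised score to accept or reject perturbed timetables, while a population-based method would instead rank candidates by the dominance relation that `pareto_front` encodes.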
40. Machine learning for decision-making under uncertainty

Sani, Amir 12 May 2015 (has links)
Strategic decision-making over valuable resources should take the degree of risk aversion into account, and many practical areas of application place risk at the centre of decision-making. However, machine learning does not. Research should therefore provide insights and algorithms that endow machine learning with the ability to consider decision-theoretic risk: in particular, estimating decision-theoretic risk statistics on short dependent sequences generated from the most general possible class of processes for statistical inference, and incorporating risk-averse objectives into sequential decision-making.
This thesis studies these two problems and provides principled algorithmic methods for considering decision-theoretic risk in machine learning. An algorithm with state-of-the-art performance is introduced for accurate estimation of risk statistics on the most general class of stationary ergodic processes, and risk-averse objectives are introduced into sequential decision-making (online learning) in both the stochastic multi-armed bandit setting and the adversarial full-information setting.
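A risk-averse objective changes which bandit arm counts as "best": under a mean-variance index, a safe arm can beat one with a higher mean. Below is a minimal sketch with invented Gaussian arms, not the thesis's estimators or algorithms:

```python
import random
import statistics

def mean_variance_bandit(arms, rounds=2000, risk_aversion=1.0,
                         explore_pulls=20, seed=3):
    """Risk-averse bandit sketch: after a round-robin exploration
    phase, greedily pull the arm maximising the empirical index
    mean - risk_aversion * variance (a risk-averse utility)."""
    rng = random.Random(seed)
    samples = {a: [] for a in range(len(arms))}
    history = []
    for t in range(rounds):
        if t < explore_pulls * len(arms):
            arm = t % len(arms)                    # forced exploration
        else:
            arm = max(samples,
                      key=lambda a: statistics.fmean(samples[a])
                      - risk_aversion * statistics.pvariance(samples[a]))
        samples[arm].append(arms[arm](rng))
        history.append(arm)
    return history

arms = [lambda r: r.gauss(1.0, 2.0),   # higher mean, high variance
        lambda r: r.gauss(0.8, 0.1)]   # slightly lower mean, low variance
history = mean_variance_bandit(arms)
```

A risk-neutral learner would concentrate on arm 0 (higher mean); under the mean-variance index the penalised variance of arm 0 dominates, so the greedy phase concentrates on the safe arm 1 instead.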
