  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

An optimisation approach for capacity enhancement in third generation (3G) mobile networks.

Juma, Raymond Wekesa. January 2012 (has links)
M. Tech. Electrical Engineering. / This study proposes a mathematical optimisation approach that uses a Genetic Algorithm (GA) for initialisation and a Tabu Search (TS) algorithm to find sites for node Bs in the network, enabling it to support an increased number of users and services. Global optimisation can be achieved with high probability because GA performs the global search while TS performs the local search: the memory mechanism of TS complements GA, and the hill-climbing ability of TS counters GA's premature convergence. The problem addressed is the determination of optimal node B locations based on the user distribution, while improving QoS. The proposed approach treats site selection as an integer problem and site placement as a continuous problem, and tackles both concurrently: finding the optimal number of node Bs that satisfies the capacity requirements of the network and hence improves QoS. The proposed algorithm combines the strengths of the Genetic and Tabu Search algorithms in the successive elimination of node Bs after their random distribution over the study area. The results show that the proposed approach placed fewer node B sites in the network while providing the required QoS; it also achieved higher fitness values in simulation, indicating a greater ability to meet the objective function than either TS or GA alone.
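As a hedged illustration of the two-stage idea described in this abstract (GA for a global search over candidate site subsets, then swap-based tabu search for local refinement), a minimal sketch follows. The function names (`coverage`, `ga_seed`, `tabu_refine`), the coverage-radius objective, and all parameter values are illustrative assumptions, not the thesis's actual network model.

```python
import random

def coverage(sites, users, radius=2.0):
    """Fraction of users within `radius` of at least one selected site."""
    hit = sum(1 for ux, uy in users
              if any((ux - sx) ** 2 + (uy - sy) ** 2 <= radius ** 2
                     for sx, sy in sites))
    return hit / len(users)

def ga_seed(candidates, users, k, pop=16, gens=25):
    """Global phase: genetic search over k-subsets of candidate sites."""
    population = [random.sample(candidates, k) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda s: coverage(s, users), reverse=True)
        parents = population[:pop // 2]
        children = []
        while len(parents) + len(children) < pop:
            a, b = random.sample(parents, 2)
            pool = list(set(a) | set(b))           # crossover: merge parent sites
            child = random.sample(pool, k)
            if random.random() < 0.3:              # mutation: swap in a fresh site
                child[random.randrange(k)] = random.choice(candidates)
            children.append(child)
        population = parents + children
    return max(population, key=lambda s: coverage(s, users))

def tabu_refine(sites, candidates, users, iters=40, tenure=5):
    """Local phase: single-site swap moves, with recent moves held tabu."""
    best, cur, tabu = sites, sites, []
    for _ in range(iters):
        moves = [((i, c), cur[:i] + [c] + cur[i + 1:])
                 for i in range(len(cur))
                 for c in candidates
                 if c not in cur and (i, c) not in tabu]
        if not moves:
            break
        move, cur = max(moves, key=lambda m: coverage(m[1], users))
        tabu.append(move)
        if len(tabu) > tenure:
            tabu.pop(0)
        if coverage(cur, users) > coverage(best, users):
            best = cur
    return best
```

The tabu list lets the search accept temporarily worse swaps without immediately undoing them, which is the "hill-climbing" escape mechanism the abstract credits TS with.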
32

Computational convex analysis : from continuous deformation to finite convex integration

Trienis, Michael Joseph 05 1900 (has links)
After introducing concepts from convex analysis, we study how to continuously transform one convex function into another. A natural choice is the arithmetic average, as it is pointwise continuous; however, this choice fails to average functions with different domains. In contrast, the proximal average is not only continuous (in the epi-topology) but can actually average functions with disjoint domains. In fact, the proximal average not only inherits strict convexity (like the arithmetic average) but also inherits smoothness and differentiability (unlike the arithmetic average). Then we introduce a computational framework for computer-aided convex analysis. Motivated by the proximal average, we notice that the class of piecewise linear-quadratic (PLQ) functions is closed under (positive) scalar multiplication, addition, Fenchel conjugation, and the Moreau envelope. As a result, the PLQ framework gives rise to linear-time and linear-space algorithms for convex PLQ functions. We extend this framework to nonconvex PLQ functions and present an explicit convex hull algorithm. Finally, we discuss a method to find primal-dual symmetric antiderivatives from cyclically monotone operators. As these antiderivatives depend on the minimal and maximal Rockafellar functions [5, Theorem 3.5, Corollary 3.10], it turns out that the minimal and maximal functions in [12, pp. 132, 136] are indeed the same functions. Algorithms used to compute these antiderivatives can be formulated as shortest path problems.
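The proximal average mentioned above can be prototyped numerically with discrete Fenchel conjugates on a grid, in the spirit of computational convex analysis; note this brute-force O(n²) version is only a sketch, not the thesis's linear-time PLQ algorithms, and the grid choices are assumptions.

```python
import numpy as np

def conjugate(x, fx, s):
    """Discrete Fenchel conjugate f*(s) = max_x [s*x - f(x)] over grid x."""
    return np.max(s[:, None] * x[None, :] - fx[None, :], axis=1)

def proximal_average(x, f0, f1, lam):
    """Grid approximation of the proximal average
    P_lam = ((1-lam)*(f0+q)* + lam*(f1+q)*)* - q,  with q(x) = 0.5*x^2."""
    q = 0.5 * x ** 2
    g0 = conjugate(x, f0 + q, x)   # (f0 + q)*, evaluated on the same grid
    g1 = conjugate(x, f1 + q, x)
    mix = (1 - lam) * g0 + lam * g1
    return conjugate(x, mix, x) - q
```

For the two quadratics f0(x) = x² and f1(x) = (x-1)², the exact proximal average at lam = 1/2 works out to (x - 1/2)² + 1/12, and the grid version reproduces this to within discretization error on the interior of the grid; at lam = 0 or 1 it recovers f0 or f1, consistent with the continuity property the abstract highlights.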
33

Nonconvex Dynamical Problems

Rieger, Marc Oliver 28 November 2004 (has links) (PDF)
Many problems in continuum mechanics, especially in the theory of elastic materials, lead to nonlinear partial differential equations. The nonconvexity of their underlying energy potential is a challenge for mathematical analysis, since convexity plays an important role in the classical theories of existence and regularity. In recent years, a main point of interest has been to develop techniques to circumvent these difficulties. One approach is to use different notions of convexity, such as quasi- or polyconvexity, but most of this work has been done only for static (time-independent) equations. In this thesis we make contributions concerning the existence, regularity and numerical approximation of nonconvex dynamical problems.
34

Interior Epigraph Directions (IED) method for nonsmooth, nonconvex optimization via Lagrangian duality: strategies for minimizing the augmented Lagrangian

Franco, Hernando José Rocha 08 June 2018 (has links)
The classical theory of optimization assumes certain conditions, for example, that the functions involved are at least once continuously differentiable. In many practical applications that require optimization methods, however, this property does not hold. Non-differentiable optimization problems are considered harder to handle, and within this class those involving nonconvex functions are harder still. Interior Epigraph Directions (IED) is an optimization method based on Lagrangian duality theory that applies to non-differentiable, nonconvex, constrained problems. In this study, we present two new versions of this method built on computational implementations of other algorithms. The first version, called IED+NFDNA, incorporates an implementation of the Nonsmooth Feasible Direction Nonconvex Algorithm (NFDNA). In numerical experiments with test problems from the literature, it performed satisfactorily compared with the original IED and other optimization solvers. A second version, IED+GA, was developed using genetic algorithms in order to refine the method further, reducing its dependence on initial parameters and on the computation of subgradients. Beyond the test problems, IED+GA also obtained good results when applied to engineering problems.
36

Distributed Optimization with Nonconvexities and Limited Communication

Magnússon, Sindri January 2016 (has links)
In the economical and sustainable operation of cyber-physical systems, a number of entities often need to cooperate over a communication network to solve optimization problems. A challenging aspect in the design of robust distributed solution algorithms to these optimization problems is that, as technology advances and the networks grow larger, the communication bandwidth available to coordinate the solution is limited. Moreover, even though most research has focused on distributed convex optimization, nonconvex problems are often encountered in cyber-physical systems, e.g., localization in wireless sensor networks and optimal power flow in smart grids, the solution of which poses major technical difficulties. Motivated by these challenges, this thesis investigates distributed optimization with an emphasis on limited communication, for both convex and nonconvex structured problems. In particular, the thesis consists of four articles, summarized below. The first two papers investigate the convergence of distributed gradient solution methods for the resource allocation problem, where gradient information is communicated at every iteration using limited communication. In particular, the first paper investigates how distributed dual descent methods can perform demand-response in power networks using one-way communication. To achieve one-way communication, the power supplier first broadcasts a coordination signal to the users and then updates it using physical measurements of the aggregated power usage. Since the users do not communicate back to the supplier, but only take a measurable action, it is essential that the algorithm remains primal feasible at every iteration to avoid blackouts. The paper demonstrates how such blackouts can be avoided by appropriately choosing the algorithm parameters. Moreover, the convergence rate of the algorithm is investigated.
The second paper builds on the work of the first paper and considers more general resource allocation problem with multiple resources. In particular, a general class of quantized gradient methods are studied where the gradient direction is approximated by a finite quantization set. Necessary and sufficient conditions on the quantization set are provided to guarantee the ability of these methods to solve a large class of dual problems. A lower bound on the cardinality of the quantization set is provided, along with specific examples of minimal quantizations. Furthermore, convergence rate results are established that connect the fineness of the quantization and number of iterations needed to reach a predefined solution accuracy. The results provide a bound on the number of bits needed to achieve the desired accuracy of the optimal solution. The third paper investigates a particular nonconvex resource allocation problem, the Optimal Power Flow (OPF) problem, which is of central importance in the operation of power networks. An efficient novel method to address the general nonconvex OPF problem is investigated, which is based on the Alternating Direction Method of Multipliers (ADMM) combined with sequential convex approximations. The global OPF problem is decomposed into smaller problems associated to each bus of the network, the solutions of which are coordinated via a light communication protocol. Therefore, the proposed method is highly scalable. The convergence properties of the proposed algorithm are mathematically and numerically substantiated. The fourth paper builds on the third paper and investigates the convergence of distributed algorithms as in the third paper but for more general nonconvex optimization problems. In particular, two distributed solution methods, including ADMM, that combine the fast convergence properties of augmented Lagrangian-based methods with the separability properties of alternating optimization are investigated. 
The convergence properties of these methods are investigated, and sufficient conditions are established under which the algorithms asymptotically reach the first-order necessary conditions for optimality. Finally, the results are numerically illustrated on a nonconvex localization problem in wireless sensor networks. The results of this thesis advocate the promising convergence behaviour of some distributed optimization algorithms on nonconvex problems. Moreover, they demonstrate the potential of solving convex distributed resource allocation problems using very limited communication bandwidth. Future work will consider how even more general convex and nonconvex problems can be solved using limited communication bandwidth, and will also study lower bounds on the bandwidth needed to solve general resource allocation optimization problems.
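The one-way, limited-communication coordination described in the first two papers can be caricatured in a few lines: a supplier broadcasts a price, observes only the sign of the aggregate excess demand (a 1-bit quantized gradient), and updates with a diminishing step. This is a hedged toy model with quadratic user objectives, not the papers' actual algorithms or assumptions.

```python
def user_response(a, p):
    """User i minimizes 0.5*(x - a_i)^2 + p*x locally  =>  x_i(p) = a_i - p."""
    return [ai - p for ai in a]

def one_bit_dual_descent(a, capacity, iters=200):
    """Price (dual) update driven only by the sign of excess demand."""
    p = 0.0
    for k in range(1, iters + 1):
        excess = sum(user_response(a, p)) - capacity
        p += (1.0 / k) * (1.0 if excess > 0 else -1.0)  # 1-bit quantized gradient
    return p, user_response(a, p)
```

Because the sign always points toward the market-clearing price, the iterate eventually stays within one step size of it, so the diminishing 1/k step drives the error to zero, mirroring the papers' theme that coarse quantization suffices if the quantization set still determines a descent direction.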
37

Multicategory psi-learning and support vector machine

Liu, Yufeng 18 June 2004 (has links)
No description available.
38

Advanced machine learning techniques based on DC programming and DCA

Ho, Vinh Thanh 08 December 2017 (has links)
In this dissertation, we develop advanced machine learning techniques in the framework of online learning and reinforcement learning (RL). The backbone of our approaches is DC (Difference of Convex functions) programming and DCA (DC Algorithm), together with their online versions, which are known as powerful nonsmooth, nonconvex optimization tools. The dissertation is composed of two parts: the first studies online machine learning techniques, and the second concerns RL in both batch and online modes. The first part includes two chapters, on online classification (Chapter 2) and prediction with expert advice (Chapter 3). These chapters develop a unified DC approximation approach to different online learning problems whose objective functions are 0-1 loss functions, and thoroughly study how to design online DCA algorithms that are efficient in both theoretical and computational terms. The second part consists of four chapters (Chapters 4-7). After a brief introduction to RL and related work in Chapter 4, Chapter 5 provides effective batch-mode RL techniques based on DC programming and DCA: we consider four different DC optimization formulations, develop corresponding DCA-based algorithms for them, address the key issues of DCA, and show the computational efficiency of these algorithms through various experiments. Continuing this study, Chapter 6 develops DCA-based RL techniques in online mode and proposes alternating versions of them. As an application, Chapter 7 tackles the stochastic shortest path (SSP) problem. A particular class of SSP problems can be reformulated in two ways: as a cardinality minimization formulation and as an RL formulation. The cardinality formulation involves the zero-norm in the objective and binary variables; we propose a DCA-based algorithm that exploits a DC approximation of the zero-norm and an exact penalty technique for the binary variables. For the RL formulation, we make use of the aforementioned DCA-based batch RL algorithm. All proposed algorithms are tested on artificial road networks.
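As a hedged illustration of the DC programming / DCA machinery underlying these chapters (deliberately unrelated to the thesis's RL formulations): to minimize a DC function f = g - h with g and h convex, DCA takes a subgradient of h at the current iterate and minimizes the resulting convex majorant. For f(x) = x^4 - 2x^2, with the split g(x) = x^4 and h(x) = 2x^2, the convex subproblem has a closed-form solution.

```python
import math

def f(x):
    return x ** 4 - 2 * x ** 2          # DC function: g(x) - h(x)

def dca_step(x):
    y = 4 * x                           # subgradient of h(x) = 2x^2 at x
    # convex subproblem: argmin_x g(x) - y*x with g(x) = x^4;
    # stationarity 4x^3 = y gives x = cbrt(y / 4)
    t = y / 4
    return math.copysign(abs(t) ** (1 / 3), t) if t else 0.0

def dca(x0, iters=60):
    """Iterate the DCA step; each step decreases f (majorant minimization)."""
    x = x0
    for _ in range(iters):
        x = dca_step(x)
    return x
```

From any nonzero start the iterates converge to a critical point x = 1 or x = -1 (both global minimizers here, with f = -1), while x = 0 is the unstable critical point, a small example of DCA finding good stationary points of a nonconvex objective.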
39

Descent dynamical systems and algorithms for tame optimization, and multi-objective problems

Garrigos, Guillaume 02 November 2015 (has links)
In the first part, we focus on gradient dynamical systems governed by functions that are non-smooth and non-convex but satisfy the Kurdyka-Lojasiewicz inequality. After obtaining preliminary results for a continuous steepest descent dynamic, we study a general descent algorithm. We prove, under a compactness assumption, that any sequence generated by this general scheme converges to a critical point of the function. We also obtain new convergence rates, both for the values and for the iterates. The analysis covers alternating versions of the forward-backward method, with variable metric and relative errors. As an example, a non-smooth and non-convex version of the Levenberg-Marquardt algorithm is detailed. Applications to non-convex feasibility problems and to sparse inverse problems are discussed. In the second part, the thesis explores descent dynamics associated with constrained vector optimization problems. For this, we adapt the classic steepest descent dynamic to functions with values in a vector space ordered by a solid closed convex cone; this can be seen as the continuous analogue of various descent algorithms developed in recent years. We have a particular interest in multi-objective decision problems, for which the dynamic makes all the objective functions decrease over time. We prove the existence of trajectories for this continuous dynamic and show their convergence to weakly efficient points. Finally, we explore an inertial dynamic for multi-objective problems, with the aim of providing fast methods that converge to Pareto points.
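The forward-backward method analyzed in the first part can be illustrated, in its basic convex instance, by proximal gradient descent on 0.5||Ax - b||² + lam*||x||₁ (ISTA). The thesis's setting is far more general (non-convex, variable metric, relative errors), so this is only a hedged baseline sketch with made-up data.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (componentwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward(A, b, lam, iters=500):
    """Forward (gradient) step on the smooth part, backward (prox) step on the
    non-smooth part, with step 1/L for L the gradient's Lipschitz constant."""
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                     # forward step
        x = soft_threshold(x - grad / L, lam / L)    # backward step
    return x
```

For A the identity the scheme reduces to one exact shrinkage of b, which makes the fixed point easy to check by hand; for general A it converges at the usual O(1/k) rate in function values in the convex case.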
40

Nonconvex Economic Dispatch by Integrated Artificial Intelligence

Cheng, Fu-Sheng 11 June 2001 (has links)
This dissertation presents a new algorithm that integrates evolutionary programming (EP), tabu search (TS) and quadratic programming (QP), named the evolutionary-tabu quadratic programming (ETQ) method, to solve the nonconvex economic dispatch problem (NED). The problem covers economic dispatch with valve-point effects (EDVP), economic dispatch with piecewise quadratic cost functions (EDPQ), and economic dispatch with prohibited operating zones (EDPO); under ETQ these become similar problems. Each is solved in two phases: a cost-curve-selection subproblem and a standard ED subproblem. The first phase is handled by a hybrid of EP and TS, and the second by QP. In the solution process, EP with a repairing strategy generates feasible solutions, TS prevents premature convergence, and QP enhances performance. Numerical results show that the proposed method is more effective than other previously developed evolutionary computation algorithms.
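The QP phase of the two-phase scheme reduces, for unconstrained quadratic costs, to the classical equal incremental-cost condition, sketched below. Generator limits, valve-point terms, and prohibited zones (the parts EP and TS are there to handle) are deliberately omitted, and the cost coefficients are made up.

```python
def dispatch(costs, demand):
    """Equal incremental-cost dispatch for quadratic costs C_i(P) = a_i P^2 + b_i P.
    Stationarity gives 2 a_i P_i + b_i = lam for all units, sum P_i = demand,
    so lam = (demand + sum b_i/(2 a_i)) / sum 1/(2 a_i)."""
    inv = [1 / (2 * a) for a, b in costs]
    lam = (demand + sum(b * w for (a, b), w in zip(costs, inv))) / sum(inv)
    return [(lam - b) / (2 * a) for a, b in costs]
```

At the solution every unit runs at the same marginal cost lam, which is why the discrete phase (choosing which cost curve or operating zone applies) is the hard, nonconvex part that the hybrid EP/TS search addresses.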
