61.
Modélisation et techniques d'optimisation en bio-informatique et fouille de données / Modelling and techniques of optimization in bioinformatics and data mining. Belghiti, Moulay Tayeb, 01 February 2008.
This thesis addresses two types of problems: clustering and multiple sequence alignment. Our objective is to solve these global problems efficiently and to test the DC programming and DCA approach on real-world datasets. The thesis consists of three parts. The first part is devoted to new approaches in nonconvex optimization: we present an in-depth study of the algorithm used throughout this thesis, namely DC (Difference of Convex functions) programming and the DC Algorithm (DCA). In the second part, we model the clustering problem as three nonconvex subproblems. The first two subproblems differ in the choice of norm (clustering via the 1-norm and the 2-norm); the third uses the kernel method (kernel clustering). The third part is devoted to bioinformatics, focusing on the modelling and solution of two subproblems: multiple sequence alignment and structure-based RNA sequence alignment. All chapters except the first end with numerical tests.
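The abstract does not spell out the DCA iteration itself; as a rough illustration, here is a minimal sketch of a generic DCA step for an objective f = g - h with g and h convex, applied to a toy one-dimensional example. The decomposition and the closed-form subproblem are assumptions for illustration, not the thesis's actual formulation.

```python
import numpy as np

def dca(grad_h, argmin_g_linear, x0, max_iter=100, tol=1e-8):
    """Generic DC Algorithm for minimising f(x) = g(x) - h(x), g and h convex.

    At each step, linearise h at the current iterate and minimise the
    resulting convex majorant of f:
        y_k in subdifferential of h at x_k
        x_{k+1} = argmin_x  g(x) - <y_k, x>
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        y = grad_h(x)                  # (sub)gradient of the subtracted convex part
        x_new = argmin_g_linear(y)     # solve the convex subproblem
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

# Toy DC decomposition: f(x) = x^4 - 2x^2 with g(x) = x^4, h(x) = 2x^2.
# The subproblem argmin_x x^4 - y*x has the closed form x = cbrt(y / 4).
grad_h = lambda x: 4.0 * x
argmin_g_linear = lambda y: np.cbrt(y / 4.0)
print(dca(grad_h, argmin_g_linear, x0=[0.5]))  # converges to 1, a global minimiser here
```

Each DCA step decreases f monotonically, which is the property the thesis exploits for its clustering and alignment models.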
62.
Solutions variationnelles et solutions de viscosité de l'équation de Hamilton-Jacobi / Variational and viscosity solutions of the Hamilton-Jacobi equation. Roos, Valentine, 30 June 2017.
We study the first-order evolutionary Hamilton-Jacobi equation coupled with a Lipschitz initial condition. The purpose of this thesis is to compare two notions of weak solution for this equation, the viscosity solution and the variational solution, which are known to coincide in convex Hamiltonian dynamics. In order to work in a framework relevant to both notions, we first need to build a variational solution without compactness assumptions on the manifold or the Hamiltonian. To do so, we follow the historical construction, detailing the properties of the generating family obtained via the broken geodesics method. The resulting local estimates allow us to prove that the viscosity solution can be obtained from the variational solution via an iterative process. We then check that this construction indeed gives the viscosity solution for a convex Hamiltonian, and characterize the integrable Hamiltonians for which this property persists by carefully studying elementary examples in dimensions 1 and 2.
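For reference, the Cauchy problem in question has the standard form below; the notation (manifold M, horizon T) is generic, not taken from the thesis.

```latex
\begin{cases}
\partial_t u(t,x) + H\bigl(t, x, \partial_x u(t,x)\bigr) = 0,
  & (t,x) \in (0,T) \times M,\\[2pt]
u(0,x) = u_0(x), & u_0 \ \text{Lipschitz}.
\end{cases}
```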
63.
Parallel and Decentralized Algorithms for Big-data Optimization over Networks. Amir Daneshmand (11153640), 22 July 2021.
Recent decades have witnessed the rise of a data deluge generated by heterogeneous sources (e.g., social networks, streaming, marketing services), which has naturally created a surge of interest in the theory and applications of large-scale convex and nonconvex optimization. For example, real-world instances of statistical learning problems such as deep learning and recommendation systems can generate sheer volumes of spatially/temporally diverse data (up to petabytes in commercial applications) with millions of decision variables to be optimized. Such problems are often referred to as big-data problems. Solving them with standard optimization methods demands an intractable amount of centralized storage and computational resources; developing parallel and decentralized algorithms that avoid this bottleneck is the foremost purpose of this thesis.

This thesis consists of two parts: (I) Distributed Nonconvex Optimization and (II) Distributed Convex Optimization.

In Part (I), we start from a winning paradigm in big-data optimization, the Block Coordinate Descent (BCD) algorithm, which ceases to be effective when problem dimensions grow overwhelmingly. In particular, we consider a general family of constrained nonconvex composite large-scale problems defined on multicore computing machines equipped with shared memory. We design a hybrid deterministic/random parallel algorithm that solves such problems efficiently by combining Successive Convex Approximation (SCA) synergically with greedy/random dimensionality-reduction techniques. We provide theoretical and empirical results showing the efficacy of the proposed scheme in the face of huge-scale problems. The next step is to broaden the setting to general mesh networks modeled as directed graphs, for which we propose a class of gradient-tracking-based algorithms with global convergence guarantees to critical points of the problem. We further explore the geometry of the landscape of the nonconvex problems to establish second-order guarantees, strengthening our convergence results from local to global optimal solutions for a wide range of machine learning problems.

In Part (II), we focus on a family of distributed convex optimization problems defined over meshed networks. Relevant state-of-the-art algorithms often consider limited problem settings and have pessimistic communication complexities relative to their centralized variants, which raises an important question: can one achieve the rate of centralized first-order methods over networks, and moreover, can one improve upon their communication costs by using higher-order local solvers? To answer these questions, we propose an algorithm that utilizes surrogate objective functions in the local solvers (hence going beyond first-order realms such as proximal gradient), coupled with a perturbed (push-sum) consensus mechanism that aims to track the gradient of the central objective function locally. The algorithm provably matches the convergence rate of its centralized counterparts, up to multiplicative network factors. When considering, in particular, Empirical Risk Minimization (ERM) problems with statistically homogeneous data across the agents, our algorithm employing high-order surrogates provably achieves faster rates than what is achievable by first-order methods. These improvements are made without exchanging any Hessian matrices over the network.
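The gradient-tracking update is not written out in the abstract; the following is a minimal sketch of a standard gradient-tracking iteration over an undirected network (the thesis's algorithms handle the more general directed case). The quadratic local losses and the mixing matrix are illustrative assumptions.

```python
import numpy as np

def gradient_tracking(W, grads, x0, step=0.05, iters=500):
    """Standard gradient tracking for min_x (1/n) * sum_i f_i(x).

    Each agent i keeps an estimate x_i of the solution and a tracker y_i
    of the network-average gradient:
        x_i^{k+1} = sum_j W_ij x_j^k - step * y_i^k
        y_i^{k+1} = sum_j W_ij y_j^k + grad f_i(x_i^{k+1}) - grad f_i(x_i^k)
    """
    n = x0.shape[0]
    x = x0.copy()
    g = np.stack([grads[i](x[i]) for i in range(n)])
    y = g.copy()                       # initialise tracker with local gradients
    for _ in range(iters):
        x = W @ x - step * y           # consensus step plus descent along tracker
        g_new = np.stack([grads[i](x[i]) for i in range(n)])
        y = W @ y + g_new - g          # track the average gradient
        g = g_new
    return x

# Toy problem: 3 agents, f_i(x) = 0.5 * ||x - b_i||^2, whose global
# minimiser is the average of the b_i.
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])     # doubly stochastic mixing matrix
b = np.array([[1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])
grads = [lambda z, bi=b[i]: z - bi for i in range(3)]
print(gradient_tracking(W, grads, x0=np.zeros((3, 2))))
# each row approaches b.mean(axis=0) = [1, 1]
```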
Finally, we focus on the ill-conditioning that impairs the efficiency of decentralized first-order methods over networks and can render them impractical in terms of both computation and communication cost. A natural solution is to develop distributed second-order methods, but their need for Hessian information incurs substantial communication overhead on the network. To work around such exorbitant communication costs, we propose a "statistically informed" preconditioned cubic regularized Newton method that provably improves upon the rates of first-order methods. The proposed scheme does not require communication of Hessian information over the network, and yet achieves the iteration complexity of centralized second-order methods up to the statistical precision. In addition, the (second-order) approximate nature of the surrogate functions improves upon the per-iteration computational cost of our earlier proposed scheme in this setting.
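For context, a generic cubic-regularized Newton step has the textbook form below; the thesis's preconditioned, statistically informed variant replaces the exact Hessian with a local surrogate, so this is not the thesis's actual update.

```latex
x_{k+1} = x_k + \arg\min_{s}\;\Bigl\{ \langle \nabla f(x_k), s \rangle
  + \tfrac{1}{2} \bigl\langle \nabla^2 f(x_k)\, s, s \bigr\rangle
  + \tfrac{M}{6} \lVert s \rVert^3 \Bigr\},
```

where M bounds the Lipschitz constant of the Hessian; the cubic term makes each step a global model of f, which is what yields second-order iteration complexity.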
64.
Linearization-Based Strategies for Optimal Scheduling of a Hydroelectric Power Plant Under Uncertainty / Linearization-Based Scheduling of Hydropower Systems. Tikk, Alexander, January 2019.
This thesis examines the optimal scheduling, under uncertainty, of a hydroelectric power plant with cascaded reservoirs, each containing multiple generating units, after testing three linearization methods: Successive Linear Programming, Piecewise Linear Approximation, and a Hybrid of the two. The work has two goals. The first is to replace the nonconvex mixed-integer nonlinear program (MINLP) with a computationally efficient linearized mixed-integer linear program (MILP) capable of finding a high-quality solution, preferably the global optimum. The second is to apply a stochastic approach to the linearized model in a pseudo-rolling-horizon method that keeps the ending time step fixed. Overall, the Hybrid method proved to be a viable replacement and performs well in the pseudo-rolling-horizon tests. Thesis / Master of Applied Science (MASc)
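As a rough illustration of the Successive Linear Programming idea mentioned above (not the thesis's actual scheduling model), the sketch below repeatedly linearizes a nonlinear objective about the current point and solves the resulting LP inside a shrinking trust region; the toy objective and bounds are assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def slp(f, grad_f, x0, bounds, radius=1.0, shrink=0.7, iters=30):
    """Successive Linear Programming: minimise f over box bounds by
    solving a sequence of linearised subproblems in a trust region:

        x_{k+1} = argmin  grad_f(x_k)^T x
                  s.t.    |x - x_k| <= radius_k componentwise, x in bounds
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = grad_f(x)
        # Intersect the original box with the trust-region box around x.
        local = [(max(lo, xi - radius), min(hi, xi + radius))
                 for (lo, hi), xi in zip(bounds, x)]
        x_new = linprog(c=g, bounds=local, method="highs").x
        if f(x_new) >= f(x):       # no improvement: shrink the trust region
            radius *= shrink
        else:
            x = x_new
    return x

# Toy nonconvex objective on a box (standing in for, e.g., a nonlinear
# generation curve): f(x, y) = sin(x) * cos(y) + 0.1 * (x^2 + y^2).
f = lambda v: np.sin(v[0]) * np.cos(v[1]) + 0.1 * (v[0]**2 + v[1]**2)
grad_f = lambda v: np.array([np.cos(v[0]) * np.cos(v[1]) + 0.2 * v[0],
                             -np.sin(v[0]) * np.sin(v[1]) + 0.2 * v[1]])
print(slp(f, grad_f, x0=[1.0, 1.0], bounds=[(-3, 3), (-3, 3)]))
```

Like any SLP scheme, this converges only to a local optimum of the nonconvex problem, which is why the thesis also studies piecewise-linear MILP reformulations that can certify global optimality.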
65.
Stabilised finite element approximation for degenerate convex minimisation problems. Boiger, Wolfgang Josef, 19 August 2013.
The online version of this document contains software distributed under the terms of the GNU General Public License, either version 3 of the License or any later version. Further information about authors and licensing terms can be found in Appendix B of the document and in LICENSE.txt inside the embedded tar file. The tar file can be opened with suitable software, e.g. Acrobat Reader and 7-Zip, or KDE Okular and GNU tar.

Infimising sequences of nonconvex variational problems often do not converge strongly in Sobolev spaces due to fine oscillations. These oscillations are physically meaningful; finite element approximations, however, fail to resolve them in general. Relaxation methods replace the nonconvex energy with its (semi)convex hull. This leads to a macroscopic model which is degenerate in the sense that it is not strictly convex and possibly admits multiple minimisers. The lack of control on the primal variable leads to difficulties in the a priori and a posteriori finite element error analysis, such as the reliability-efficiency gap and the absence of strong convergence. To overcome these difficulties, stabilisation techniques add a discrete positive definite term to the relaxed energy. Bartels et al. (IFB, 2004) apply stabilisation to two-dimensional problems and thereby prove strong convergence of gradients. This result is restricted to smooth solutions and quasi-uniform meshes, which prohibits adaptive mesh refinement.

This thesis concerns a modified stabilisation term and proves convergence of the stress and, for smooth solutions, strong convergence of gradients, even on unstructured meshes. Furthermore, the thesis derives the so-called flux error estimator and proves its reliability and efficiency. For interface problems with piecewise smooth solutions, a refined version of this error estimator is developed, which provides control of the error of the primal variable and its gradient and thus yields strong convergence of gradients. The refined error estimator converges faster than the flux error estimator and therefore narrows the reliability-efficiency gap. Numerical experiments with five benchmark examples from computational microstructure and topology optimisation complement and confirm the theoretical results.
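A typical form of such a stabilised discrete energy, shown for orientation only (the jump term and exponent below are generic assumptions, not the thesis's modified term):

```latex
E_h(v_h) \;=\; E^{\mathrm{conv}}(v_h)
  \;+\; \tfrac{1}{2}\sum_{E \in \mathcal{E}_h} h_E^{\alpha}\,
        \bigl\lVert [\nabla v_h] \bigr\rVert_{L^2(E)}^2,
\qquad v_h \in V_h,
```

where E^conv is the (semi)convexified energy, the sum runs over the interior edges of the mesh, h_E is the edge diameter, and [∇v_h] denotes the jump of the discrete gradient across an edge; the added term is positive definite on the discrete space and restores control of the primal variable.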