
Spectral functions and smoothing techniques on Jordan algebras

Baes, Michel 22 September 2006
Successful methods for a large class of nonlinear convex optimization problems have recently been developed. This class, known as self-scaled optimization problems, was defined by Nesterov and Todd in 1994. As noticed by Guler in 1996, this class is best described using an algebraic structure known as a Euclidean Jordan algebra, which provides an elegant and powerful unifying framework for its study. Euclidean Jordan algebras are now a popular setting for the analysis of algorithms designed for self-scaled optimization problems: dozens of research papers in optimization deal explicitly with them. This thesis proposes an extensive and self-contained description of Euclidean Jordan algebras, and shows how they can be used to design and analyze new algorithms for self-scaled optimization. Our work focuses on the so-called spectral functions on Euclidean Jordan algebras, a natural generalization of spectral functions of symmetric matrices. Based on an original variational analysis technique for Euclidean Jordan algebras, we discuss their most important properties, such as differentiability and convexity. We show how these results can be applied to extend several existing algorithms for linear or second-order cone programming to the general class of self-scaled problems. In particular, our methods allowed us to extend Nesterov's powerful smoothing techniques to some nonlinear convex problems and to demonstrate their excellent theoretical and practical efficiency.
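
For orientation, the central objects of this abstract can be stated in a standard form (generic definitions with assumed notation, not necessarily the thesis's own): in a Euclidean Jordan algebra of rank r, every element admits a spectral decomposition with respect to a Jordan frame, and a spectral function is a symmetric function of the resulting eigenvalues.

```latex
% Standard definitions of a spectral decomposition and a spectral function
% on a Euclidean Jordan algebra of rank r (notation assumed, not the thesis's).
\[
  x \;=\; \sum_{i=1}^{r} \lambda_i(x)\, c_i ,
  \qquad \{c_1,\dots,c_r\} \ \text{a Jordan frame},
\]
\[
  F(x) \;:=\; f\bigl(\lambda_1(x),\dots,\lambda_r(x)\bigr),
  \qquad f:\mathbb{R}^r \to \mathbb{R} \ \text{symmetric in its arguments}.
\]
% For the algebra of n-by-n symmetric matrices this reduces to the familiar
% spectral functions of eigenvalues, e.g. F(X) = log det X with f the sum of logs.
```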

Optimization algorithms in compressive sensing (CS) sparse magnetic resonance imaging (MRI)

Takeva-Velkova, Viliyana 01 June 2010
Magnetic Resonance Imaging (MRI) is an essential instrument in clinical diagnosis; however, it is burdened by a slow data acquisition process due to physical limitations. Compressive Sensing (CS) is a recently developed mathematical framework that offers significant benefits in MRI imaging speed by reducing the amount of acquired data without degrading the image quality. The process of image reconstruction involves solving a nonlinear constrained optimization problem. The reduction of reconstruction time in MRI is of significant benefit. We reformulate sparse MRI reconstruction as a Second Order Cone Program (SOCP). We also explore two alternative techniques to solving the SOCP problem directly: NESTA and a specifically designed SOCP-LB.
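
For context, a common way to pose the reconstruction and recast it as an SOCP is sketched below (a generic basis-pursuit-denoising formulation with assumed notation: F_u is the undersampled Fourier operator, Psi a sparsifying transform, y the acquired k-space data; the thesis's exact model may differ).

```latex
% Generic sparse MRI recovery problem and an equivalent SOCP (assumed notation).
\[
  \min_{x}\ \|\Psi x\|_{1}
  \quad \text{subject to} \quad \|F_u x - y\|_{2} \le \varepsilon
\]
% Introducing one slack variable per transform coefficient yields an SOCP:
\[
  \min_{x,\,t}\ \sum_{i} t_i
  \quad \text{subject to} \quad
  -t_i \le (\Psi x)_i \le t_i \ \ \forall i,
  \qquad \|F_u x - y\|_{2} \le \varepsilon .
\]
```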

Novel Convex Optimization Approaches for VLSI Floorplanning

Luo, Chaomin January 2008
The floorplanning problem aims to arrange a set of rectangular modules on a rectangular chip area so as to optimize an appropriate measure of performance. This problem is known to be NP-hard, and is particularly challenging if the chip dimensions are fixed. Fixed-outline floorplanning is becoming increasingly important as a tool in design flows for the hierarchical design of Application-Specific Integrated Circuits and Systems-on-Chip, and it has therefore received much recent attention. A two-stage convex optimization methodology is proposed to solve the fixed-outline floorplanning problem, treated as a global optimization problem for wirelength minimization. In the first stage, an attractor-repeller convex optimization model provides the relative positions of the modules on the floorplan. The second stage places and sizes the modules using convex optimization. Given the relative positions of the modules from the first stage, a Voronoi diagram and Delaunay triangulation method is used to obtain a planar graph and hence a relative position matrix connecting the two stages. An efficient method for generating sparse relative position matrices and an interchange-free algorithm for local improvement of the floorplan are also presented. Experimental results on the standard MCNC and GSRC benchmarks demonstrate significant improvements over the best results in the literature. Overlap-free and deadspace-free floorplans are achieved within a fixed outline, and floorplans with any specified percentage of whitespace can be produced. Most importantly, our method provides a greater improvement as the number of modules increases. A very important feature of our methodology is that not only do the dimensions of the floorplans in our experiments comply with the original ones provided in the GSRC benchmark, but zero-deadspace floorplans can also be obtained. Thus, our approach is able to guarantee complete area utilization in a fixed-outline situation. Our method is also applicable to area minimization in classical floorplanning.
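
As a small illustration of how a triangulation can supply the relative positions connecting the two stages, the sketch below builds a Delaunay graph over hypothetical module centres and labels each edge by its dominant direction. The function name and the dominant-direction rule are illustrative assumptions, not the dissertation's actual procedure.

```python
# A minimal sketch, not the dissertation's code: derive pairwise relative-position
# relations from a Delaunay triangulation of rough module centres (e.g. the output
# of an attractor-repeller stage). Names and the classification rule are assumptions.
import numpy as np
from scipy.spatial import Delaunay

def relative_positions(centres):
    """Map each Delaunay edge (i, j), i < j, to a relative-position tag."""
    tri = Delaunay(centres)
    edges = set()
    for simplex in tri.simplices:                 # each triangle contributes 3 edges
        for a in range(3):
            i, j = sorted((int(simplex[a]), int(simplex[(a + 1) % 3])))
            edges.add((i, j))
    relations = {}
    for i, j in sorted(edges):
        dx, dy = centres[j] - centres[i]
        if abs(dx) >= abs(dy):                    # mostly horizontal separation
            relations[(i, j)] = "i left of j" if dx > 0 else "i right of j"
        else:
            relations[(i, j)] = "i below j" if dy > 0 else "i above j"
    return relations

# Hypothetical centres for four modules.
centres = np.array([[0.0, 0.0], [2.0, 0.3], [0.4, 1.8], [2.2, 2.1]])
for pair, rel in relative_positions(centres).items():
    print(pair, rel)
```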

Approximate convex decomposition and its applications

Lien, Jyh-Ming 15 May 2009
Geometric computations are essential in many real-world problems. One important issue in geometric computations is that the geometric models in these problems can be so large that computations on them have infeasible storage or computation time requirements. Decomposition is a technique commonly used to partition complex models into simpler components. Whereas decomposition into convex components results in pieces that are easy to process, such decompositions can be costly to construct and can result in representations with an unmanageable number of components. In this work, we have developed an approximate technique, called Approximate Convex Decomposition (ACD), which decomposes a given polygon or polyhedron into "approximately convex" pieces that may provide benefits similar to those of convex components, while the resulting decomposition is both significantly smaller (typically by orders of magnitude) and can be computed more efficiently. Indeed, for many applications, an ACD can represent the important structural features of the model more accurately by providing a mechanism for ignoring less significant features, such as wrinkles and surface texture. Our study of a wide range of applications shows that in addition to providing computational efficiency, ACD also provides natural multi-resolution or hierarchical representations. In this dissertation, we provide some examples of ACD's many potential applications, such as particle simulation, mesh generation, motion planning, and skeleton extraction.
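
A toy version of the acceptance test behind ACD is sketched below, using distance to the convex hull as a stand-in for a concavity measure; the helper names, the measure, and the tolerance are illustrative assumptions, and the splitting step of the actual algorithm is only indicated in a comment.

```python
# A minimal 2D sketch (not the dissertation's algorithm): a polygon piece is
# accepted as "approximately convex" if every vertex lies within a tolerance tau
# of the piece's convex hull. A full ACD would recursively split rejected pieces
# at their most concave feature; that step is omitted here.
import numpy as np
from scipy.spatial import ConvexHull

def point_to_segment(p, a, b):
    """Distance from point p to segment ab."""
    ab, ap = b - a, p - a
    t = np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def vertex_concavities(poly):
    """Distance of each polygon vertex to the boundary of its convex hull."""
    hull_pts = poly[ConvexHull(poly).vertices]    # hull vertices in CCW order
    dists = []
    for p in poly:
        edges = zip(hull_pts, np.roll(hull_pts, -1, axis=0))
        dists.append(min(point_to_segment(p, a, b) for a, b in edges))
    return np.array(dists)

def is_approximately_convex(poly, tau):
    """Accept the piece if its maximum concavity does not exceed tau."""
    return vertex_concavities(poly).max() <= tau

# Example: an L-shaped polygon (counter-clockwise vertices); the notch at (1, 1)
# lies 1.0 away from the hull boundary.
L_shape = np.array([[0, 0], [4, 0], [4, 1], [1, 1], [1, 3], [0, 3]], float)
print(is_approximately_convex(L_shape, tau=0.5))   # False: notch exceeds tolerance
print(is_approximately_convex(L_shape, tau=1.5))   # True: notch tolerated
```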

Polytopal digraphs and non-polytopal facet graphs

Mihalisin, James Edward. January 2001
Thesis (Ph.D.)--University of Washington, 2001. Vita. Includes bibliographical references (p. 69-73).

Convex optimization under inexact first-order information

Lan, Guanghui. January 2009
Thesis (Ph.D.)--Industrial and Systems Engineering, Georgia Institute of Technology, 2009. Committee Chair: Arkadi Nemirovski; Committee Co-Chair: Alexander Shapiro; Committee Co-Chair: Renato D. C. Monteiro; Committee Member: Anatoli Jouditski; Committee Member: Shabbir Ahmed. Part of the SMARTech Electronic Thesis and Dissertation Collection.

Lossless convexification of optimal control problems

Harris, Matthew Wade 30 June 2014
This dissertation begins with an introduction to finite-dimensional optimization and optimal control theory. It then proves lossless convexification for three problems: 1) a minimum time rendezvous using differential drag, 2) a maximum divert and landing, and 3) a general optimal control problem with linear state constraints and mixed convex and non-convex control constraints. Each is a unique contribution to the theory of lossless convexification. The first proves lossless convexification in the presence of singular controls and specifies a procedure for converting singular controls to the bang-bang type. The second is the first example of lossless convexification with state constraints. The third is the most general result to date. It says that lossless convexification holds when the state space is a strongly controllable subspace. This extends the controllability concepts used previously, and it recovers earlier results as a special case. Lastly, a few of the remaining research challenges are discussed.
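
The flavour of the relaxation can be illustrated with the standard textbook example (generic notation, not one of the three specific problems above): a non-convex annular control constraint is replaced by a convex one through a slack variable, and under suitable conditions the relaxed optimum satisfies the original constraint, so nothing is lost.

```latex
% Standard illustration of lossless convexification (assumed, generic notation).
% Non-convex control set on the left; convex relaxation with slack sigma(t) on the right.
\[
  0 < \rho_1 \le \|u(t)\| \le \rho_2
  \qquad\longrightarrow\qquad
  \|u(t)\| \le \sigma(t), \quad \rho_1 \le \sigma(t) \le \rho_2 .
\]
% Under appropriate controllability and maximum-principle conditions, the
% optimal relaxed solution attains \|u(t)\| = \sigma(t) almost everywhere.
```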

The Knaster-Kuratowski-Mazurkiewicz theorem and abstract convexities

Gonzalez Espinoza, Luis 05 1900
No description available.

A Class of Problems where Dual Bounds Beat Underestimation Bounds

Dür, Mirjam January 2000
We investigate the problem of minimizing a nonconvex function with respect to convex constraints, and we study different techniques to compute a lower bound on the optimal value: the method of using convex envelope functions on the one hand, and the method of exploiting nonconvex duality on the other. We investigate which technique gives the better bound and develop conditions under which the dual bound is strictly better than the convex envelope bound. As a byproduct, we derive some interesting results on nonconvex duality. (author's abstract) Series: Forschungsberichte / Institut für Statistik
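
To fix ideas, the two bounds being compared can be written in a generic form (assumed notation; the paper's precise setting may differ) for minimizing a nonconvex f over a convex feasible set C = {x in X : g(x) <= 0}.

```latex
% Convex-envelope (underestimation) bound versus Lagrangian dual bound
% (generic forms; notation assumed).
\[
  \ell_{\mathrm{env}} \;=\; \min_{x \in C}\ \operatorname{conv}_{C} f(x),
  \qquad
  \ell_{\mathrm{dual}} \;=\; \sup_{\lambda \ge 0}\ \inf_{x \in X}\
      \bigl( f(x) + \lambda^{\top} g(x) \bigr).
\]
% Both are lower bounds on \min_{x \in C} f(x); the abstract above develops
% conditions under which \ell_{\mathrm{dual}} is strictly larger than \ell_{\mathrm{env}}.
```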
