About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Minkowski addition of convex sets

Meyer, Walter J. Minkowski, H. January 1969 (has links)
Thesis (Ph. D.)--University of Wisconsin--Madison, 1969. / Typescript. Vita. eContent provider-neutral record in process. Description based on print version record. Includes bibliographical references.
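The Minkowski sum of two sets A and B is {a + b : a in A, b in B}; for convex polygons, the sum is again a convex polygon whose vertices lie among the pairwise vertex sums. As a minimal illustrative sketch (not taken from the thesis), in Python:

```python
from itertools import product

def minkowski_sum(A, B):
    """Pairwise vertex sums of two point sets: {a + b for a in A, b in B}.
    For convex polygons given by their vertex lists, the convex hull of
    this set is the vertex set of the Minkowski sum polygon."""
    return {(ax + bx, ay + by) for (ax, ay), (bx, by) in product(A, B)}

# Unit square plus a 2x2 square: the sum is the 3x3 square [0, 3]^2.
square1 = [(0, 0), (1, 0), (1, 1), (0, 1)]
square2 = [(0, 0), (2, 0), (2, 2), (0, 2)]
S = minkowski_sum(square1, square2)
xs = [p[0] for p in S]
ys = [p[1] for p in S]
print(min(xs), max(xs), min(ys), max(ys))  # 0 3 0 3
```

Note that summing axis-aligned squares simply adds their side lengths, which makes the expected result easy to check by hand.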
22

Pseudoconvexity and the envelope of holomorphy for functions of several complex variables

Mullett, Lorne Barry January 1966 (has links)
We first handle some generalizations from the theory of functions of a single complex variable, including results regarding analytic continuation. Several "theorems of continuity" are considered, along with the associated definitions of pseudoconvexity, and these are shown to be equivalent up to a special kind of transformation. By successively applying a form of analytic continuation to a function f, a set of pseudoconvex domains is constructed, and the union of these domains is shown to be the envelope of holomorphy of f. / Science, Faculty of / Mathematics, Department of / Graduate
23

Surface and volumetric parametrisation using harmonic functions in non-convex domains

Klein, Richard 29 July 2013 (has links)
A Dissertation submitted to the Faculty of Science, University of the Witwatersrand, in fulfillment of the requirements for the degree of Master of Science. Johannesburg, 2013 / Many problems in mathematics have very elegant solutions. As complex, real–world geometries come into play, however, this elegance is often lost. This is particularly the case with meshes of physical, real–world problems. Domain mapping helps to move problems from a geometrically complex domain to a regular, easy-to-use domain. Shape transformation, specifically, allows one to do this in 2D domains where mesh construction can be difficult. Numerical methods usually work over some mesh on the target domain. The structure and detail of these meshes affect the overall computation and accuracy immensely. Unfortunately, building a good mesh is not always a straightforward task. Finite Element Analysis, for example, typically requires 4–10 times the number of tetrahedral elements to achieve the same accuracy as the corresponding hexahedral mesh. Constructing this hexahedral mesh, however, is a difficult task; so in practice many people use tetrahedral meshes instead. By mapping the geometrically complex domain to a regular domain, one can easily construct elegant meshes that bear useful properties. Once a domain has been mapped to a regular domain, the mesh can be constructed and calculations performed in the new domain; results from these calculations can later be transferred back to the original domain. Using harmonic functions, source domains can be parametrised to spaces with many different desired properties. This allows one to perform calculations that would otherwise be expensive or inaccurate. This research implements and extends the methods developed in Voruganti et al. [2006, 2008] for domain mapping using harmonic functions.
The method was extended to handle cases where there are voids in the source domain, allowing the user to map domains that are not topologically equivalent to a hypersphere of the same dimension. This is accomplished through various boundary conditions on the void as it is mapped to the target domain, which allow the user to reshape and shrink the void; voids can be reduced to arcs or radial lines, or even shrunk to single points. The algorithms were implemented in two and three dimensions and ultimately parallelised to run on the Centre for High Performance Computing clusters. The parallel code also allows for arbitrary-dimension genus-0 source domains. Finally, applications such as remeshing and robot path planning were investigated and illustrated.
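The core of harmonic-function domain mapping is solving Laplace's equation with prescribed boundary conditions; by the maximum principle, the resulting map stays within the boundary values, which is what makes it well behaved for parametrisation. As a hedged sketch of that building block (a plain Jacobi relaxation on a grid, not the thesis's parallel implementation):

```python
def solve_laplace(boundary, n, iters=2000):
    """Jacobi iteration for Laplace's equation on an n x n grid.
    `boundary(i, j)` gives Dirichlet values on the edge; each interior
    cell relaxes toward the average of its four neighbours, yielding a
    discrete harmonic function."""
    u = [[boundary(i, j) if i in (0, n - 1) or j in (0, n - 1) else 0.0
          for j in range(n)] for i in range(n)]
    for _ in range(iters):
        u = [[u[i][j] if i in (0, n - 1) or j in (0, n - 1)
              else 0.25 * (u[i-1][j] + u[i+1][j] + u[i][j-1] + u[i][j+1])
              for j in range(n)] for i in range(n)]
    return u

# Potential that is 1 on the top edge and 0 on the rest of the boundary.
u = solve_laplace(lambda i, j: 1.0 if i == 0 else 0.0, 9)
```

A full coordinate map uses one such harmonic solve per target coordinate; the maximum principle guarantees every interior value here lies strictly between 0 and 1.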
24

Convexity and duality in optimization theory

Young, Stephen K January 1977 (has links)
Thesis. 1977. Ph.D.--Massachusetts Institute of Technology. Dept. of Mathematics. / MICROFICHE COPY AVAILABLE IN ARCHIVES AND SCIENCE. / Bibliography: leaves 270-272. / by Stephen Kinyon Young. / Ph.D.
25

Aspects of delta-convexity /

Duda, Jakub, January 2003 (has links)
Thesis (Ph. D.)--University of Missouri-Columbia, 2003. / Typescript. Vita. Includes bibliographical references (leaves 83-89). Also available on the Internet.
27

Some results in the area of generalized convexity and fixed point theory of multi-valued mappings / Andrew C. Eberhard

Eberhard, A. C. January 1985 (has links)
Author's `Characterization of subgradients: 1` (31 leaves) in pocket / Bibliography: leaves 229-231 / 231 leaves : 1 port ; 30 cm. / Title page, contents and abstract only. The complete thesis in print form is available from the University Library. / Thesis (Ph.D.)--University of Adelaide, 1986
28

Efficient algorithms for the maximum convex sum problem : a thesis submitted in partial fulfilment of the requirements for the degree of Master of Science in Computer Science and Software Engineering in the University of Canterbury /

Thaher, Mohammed. January 2009 (has links)
Thesis (M. Sc.)--University of Canterbury, 2009. / Typescript (photocopy). "5th February 2009." Includes bibliographical references (leaves 66-69). Also available via the World Wide Web.
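The maximum convex sum problem generalizes the classical maximum subarray problem from intervals to convex 2-D regions. As a hedged illustration of the 1-D building block only (Kadane's classical algorithm, not the thesis's own method):

```python
def max_subarray_sum(a):
    """Kadane's algorithm: the maximum sum over all contiguous
    subarrays of `a`, computed in O(n) time."""
    best = cur = a[0]
    for x in a[1:]:
        cur = max(x, cur + x)    # extend the current run or restart at x
        best = max(best, cur)
    return best

print(max_subarray_sum([1, -2, 3, 4, -1, 2, -5]))  # 8  (3 + 4 - 1 + 2)
```

Two-dimensional rectangular variants typically run this 1-D scan over column-sum prefixes of row ranges; restricting the shapes to convex regions is what the thesis's algorithms address.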
29

Applications of accuracy certificates for problems with convex structure

Cox, Bruce 21 February 2011 (has links)
This dissertation addresses the efficient generation and potential applications of accuracy certificates in the framework of "black-box-represented" convex optimization problems: convex problems where the objective and the constraints are represented by "black boxes" which, given a value x of the argument, provide (perhaps in a fashion unknown to the user) the values and derivatives of the objective and the constraints at x. The main body of the dissertation can be split into three parts. In the first part, we review the state of the art of the theory of accuracy certificates for black-box-represented convex optimization. In the second part, we extend the toolbox of black-box-oriented convex optimization algorithms by equipping a state-of-the-art algorithm for large-scale nonsmooth black-box-represented problems with convex structure, the Non-Euclidean Restricted Memory Level (NERML) method, with accuracy certificates. In the third part, we present several novel academic applications of accuracy certificates. The dissertation is organized as follows. In Chapter 1, we motivate our research goals and present a detailed summary of our results. In Chapter 2, we outline the relevant background: we describe four generic black-box-represented problems with convex structure (Convex Minimization, Convex-Concave Saddle Point, Convex Nash Equilibrium, and Variational Inequality with a Monotone Operator) and outline the existing theory of accuracy certificates for these problems. In Chapter 3, we develop techniques for equipping the state-of-the-art NERML algorithm for large-scale nonsmooth problems with convex structure with on-line accuracy certificates, both when the domain of the problem is a simple solid and when it is given by a Separation oracle.
In Chapter 4, we develop several novel academic applications of accuracy certificates, primarily (a) efficiently certifying emptiness of the intersection of finitely many solids given by Separation oracles, and (b) building efficient algorithms for convex minimization over solids given by Linear Optimization oracles (both precise and approximate). In Chapter 5, we apply accuracy certificates to efficient decomposition of "well structured" convex-concave saddle point problems, with applications to computationally attractive decomposition of a large-scale LP program whose constraint matrix becomes block-diagonal after eliminating a relatively small number of possibly dense columns (corresponding to "linking variables") and possibly dense rows (corresponding to "linking constraints").
30

Isometry and convexity in dimensionality reduction

Vasiloglou, Nikolaos 30 March 2009 (has links)
The volume of data generated each year grows exponentially. Both the number of data points and their dimensionality have increased dramatically over the past 15 years. The gap between industry demand for data processing and the solutions provided by the machine learning community keeps widening. Despite the growth in memory and computational power, advanced statistical processing of gigabyte-scale data remains out of reach: most sophisticated machine learning algorithms require at least quadratic complexity, and on current computer architectures algorithms with complexity higher than linear O(N) or O(N log N) are not considered practical. Dimensionality reduction is a challenging problem in machine learning. Data represented as multidimensional points often have high dimensionality, yet the information they carry can be expressed with far fewer dimensions. Moreover, the reduced dimensions of the data can have better interpretability than the original ones. There is a great variety of dimensionality reduction algorithms under the theory of Manifold Learning. Most of the methods, such as Isomap, Local Linear Embedding, Local Tangent Space Alignment, and Diffusion Maps, have been extensively studied under the framework of Kernel Principal Component Analysis (KPCA). In this dissertation we study two state-of-the-art dimensionality reduction methods, Maximum Variance Unfolding (MVU) and Non-Negative Matrix Factorization (NMF), neither of which fits under the umbrella of Kernel PCA. MVU is cast as a Semidefinite Program, a modern convex optimization formulation that offers more flexibility and power compared to KPCA. Although MVU and NMF seem to be two disconnected problems, we show that there is a connection between them: both are special cases of a general nonlinear factorization algorithm that we developed.
Two aspects of the algorithms are of particular interest: computational complexity and interpretability. Computational complexity answers the question of how fast we can find the best solution of MVU/NMF for large data volumes. Since we are dealing with optimization programs, we need to find the global optimum, and global optimality is strongly connected with the convexity of the problem. Interpretability is strongly connected with local isometry, which gives meaning to relationships between data points; another aspect of interpretability is the association of data with labeled information. The contributions of this thesis are the following: 1. MVU is modified so that it can scale more efficiently; results are shown on speech datasets of 1 million points, and the limitations of the method are highlighted. 2. An algorithm for fast furthest-neighbor computation is presented for the first time in the literature. 3. Construction of optimal kernels for Kernel Density Estimation with modern convex programming is presented; for the first time we show that the Leave-One-Out Cross Validation (LOOCV) function is quasi-concave. 4. For the first time, NMF is formulated as a convex optimization problem. 5. An algorithm for the problem of Completely Positive Matrix Factorization is presented. 6. A hybrid algorithm of MVU and NMF, isoNMF, is presented, combining advantages of both methods. 7. Isometric Separation Maps (ISM), a variation of MVU that incorporates classification information, is presented. 8. Large-scale nonlinear dimensional analysis on the TIMIT speech database is performed. 9. A general nonlinear factorization algorithm based on sequential convex programming is presented. Despite the efforts to scale the proposed methods up to 1 million data points in reasonable time, the gap between industrial demand and the current state of the art remains orders of magnitude wide.
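For context on the NMF side of this work, the standard (non-convex) formulation seeks nonnegative factors W, H with V ≈ WH by minimizing the Frobenius reconstruction error, classically via Lee–Seung multiplicative updates. The sketch below shows that standard baseline only; the thesis's convex reformulation of NMF is a different program.

```python
import numpy as np

def nmf(V, r, iters=200, seed=0):
    """Lee-Seung multiplicative updates for NMF: V ~= W @ H with
    W, H >= 0, reducing the Frobenius reconstruction error at each
    step. `eps` guards against division by zero."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + 0.1
    H = rng.random((r, m)) + 0.1
    eps = 1e-9
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

V = np.abs(np.random.default_rng(1).random((6, 5)))
W, H = nmf(V, 3)
err = np.linalg.norm(V - W @ H)
```

Because the updates are multiplicative and all quantities start positive, nonnegativity of W and H is preserved automatically throughout the iterations.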
