1 |
An Approximate MCMC Method for Convex Hulls. Wang, Pengfei, 20 August 2019 (has links)
Markov chain Monte Carlo (MCMC) is an extremely popular class of algorithms for
computing summaries of posterior distributions. One problem for MCMC in the so-called Big Data regime is the growing computational cost of most MCMC algorithms: the most popular basic algorithms, such as the Metropolis-Hastings (MH) algorithm and the Gibbs sampler, must take the full data set into account in every iteration. Datasets of more than 100 GB are now fairly common, and the running time of standard MCMC on such large datasets is prohibitively long.
To solve this problem, several papers develop algorithms that use only a subset of the data at each step to obtain an approximate or exact posterior distribution. Korattikara et al. (2013) estimate the transition probabilities of a typical MH chain using a subset of the data at each step, with controllable error. The Firefly Monte Carlo (FLYMC) algorithm, presented by Maclaurin and Adams, augments the original dataset and only explicitly evaluates an "active" subset in each step; they show that the marginal distribution of the FLYMC algorithm at stationarity is still equal to the posterior distribution of interest. However, both of these papers, and the other literature discussed in this thesis, are restricted to a special kind of posterior with a "product-form" likelihood, which requires all data points to be conditionally independent with the same likelihood.
The problem we want to solve, targeting a uniform distribution on a convex hull, does not have this product-form structure. We focus on this problem because in statistics we sometimes need to compute the volume of a region that is a convex hull, or whose shape can be transformed into one. Computing the volume of a high-dimensional convex hull by decomposition is impossible in practice, and according to Barany et al. (1987) the ratio between deterministic upper and lower bounds on the volume of a convex body is very large, so the volume cannot be estimated well by deterministic methods either. Fast-mixing Markov chains are essentially the only way to actually do such volume computations.
The initial work in this thesis is to define a data-augmentation algorithm along the lines of FLYMC. We also introduce an auxiliary random variable to mark subsets; however, since our situation is more complicated, we need one more variable than FLYMC to help select subsets. For this extra variable we use the pseudo-marginal algorithm (PMMH), which allows us to replace the distribution of the parameter of interest, conditional on the augmented variable, with an estimator. Although our algorithm is not a standard case, because our estimator is biased, bounds on the individual approximating measure of the parameter of interest can be translated directly into bounds on the error in the stationary measure of the algorithm.
After finishing an implementable algorithm, we use two tricks, Locality-Sensitive Hashing (LSH) and Taylor expansion, to improve the original algorithm. LSH raises the efficiency of proposing new samples of the augmented variable, while the Taylor expansion produces a more accurate estimator of the parameter of interest.
Our main theoretical result is a bound on the pointwise bias of our estimator, which
results in a bound on the total error of the chain's stationary measure. We prove that this total error converges under a certain condition. Our simulation results illustrate this, and we use a large collection of simulations to offer practical guidance on choosing parameters and chain lengths in real cases.
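The thesis's own data-augmentation sampler is not spelled out in the abstract. As background, a classical way to target a uniform distribution on a convex body with a Markov chain is the hit-and-run sampler; the sketch below is an illustration of that standard idea (not the algorithm of the thesis), assuming the body is given as a polytope {x : Ax <= b}:

```python
import random

def hit_and_run(A, b, x0, n_steps, seed=0):
    """Hit-and-run sampler for the uniform distribution on {x : A x <= b}.
    A is a list of constraint rows, b the right-hand sides, x0 an interior
    starting point. Asymptotically the iterates are uniform on the polytope."""
    rng = random.Random(seed)
    d = len(x0)
    x = list(x0)
    samples = []
    for _ in range(n_steps):
        # draw a uniformly random direction on the unit sphere
        u = [rng.gauss(0.0, 1.0) for _ in range(d)]
        norm = sum(c * c for c in u) ** 0.5
        u = [c / norm for c in u]
        # intersect the line {x + t*u} with every halfspace to find the chord
        t_lo, t_hi = float("-inf"), float("inf")
        for row, bi in zip(A, b):
            au = sum(r * c for r, c in zip(row, u))
            ax = sum(r * c for r, c in zip(row, x))
            if abs(au) < 1e-12:
                continue
            t = (bi - ax) / au
            if au > 0:
                t_hi = min(t_hi, t)
            else:
                t_lo = max(t_lo, t)
        # jump to a uniformly random point on the chord
        t = rng.uniform(t_lo, t_hi)
        x = [xi + t * ui for xi, ui in zip(x, u)]
        samples.append(tuple(x))
    return samples

# unit square [0,1]^2 written as A x <= b
A = [[1, 0], [-1, 0], [0, 1], [0, -1]]
b = [1, 0, 1, 0]
pts = hit_and_run(A, b, [0.5, 0.5], 2000)
```

The chord intersection is what keeps every proposal inside the body, so no rejection step is needed; mixing speed, however, is exactly the issue the thesis's approximate method is designed to address.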
|
2 |
The Convex Hull of Two Core Capacitated Network Design Problems. Magnanti, Thomas L., Mirchandani, Prakash, Vachani, Rita, 06 1900 (has links)
The network loading problem (NLP) is a specialized capacitated network design problem in which prescribed point-to-point demand between various pairs of nodes of a network must be met by installing (loading) a capacitated facility. We can load any number of units of the facility on each of the arcs at a specified arc dependent cost. The problem is to determine the number of facilities to be loaded on the arcs that will satisfy the given demand at minimum cost. This paper studies two core subproblems of the NLP. The first problem, motivated by a Lagrangian relaxation approach for solving the problem, considers a multiple commodity, single arc capacitated network design problem. The second problem is a three node network; this specialized network arises in larger networks if we aggregate nodes. In both cases, we develop families of facets and completely characterize the convex hull of feasible solutions to the integer programming formulation of the problems. These results in turn strengthen the formulation of the NLP.
|
3 |
Classification of Conics in the Tropical Projective Plane. Ellis, Amanda, 18 November 2005 (has links)
This paper defines tropical projective space, TP^n, and the tropical general linear group TPGL(n). After discussing some simple examples of tropical polynomials and their hypersurfaces, a strategy is given for finding all conics in the tropical projective plane. The classification of conics and an analysis of the coefficient space corresponding to such conics is given.
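As a concrete illustration of the tropical conventions the abstract alludes to (a sketch under the min-plus convention; the paper itself may use max-plus), a tropical conic in the plane is a minimum of six affine monomials, and a point lies on its hypersurface exactly when that minimum is achieved at least twice:

```python
def tropical_conic(coeffs, x, y):
    """Evaluate the tropical (min-plus) conic
    a*x^2 (+) b*xy (+) c*y^2 (+) d*x (+) e*y (+) f,
    where tropical addition is min and tropical multiplication is +."""
    a, b, c, d, e, f = coeffs
    return min(a + 2 * x, b + x + y, c + 2 * y, d + x, e + y, f)

def on_tropical_conic(coeffs, x, y, tol=1e-9):
    """A point lies on the tropical curve exactly when the minimum is
    achieved by at least two of the six monomials."""
    a, b, c, d, e, f = coeffs
    terms = sorted([a + 2 * x, b + x + y, c + 2 * y, d + x, e + y, f])
    return terms[1] - terms[0] <= tol
```

Classifying conics then amounts to describing how the regions where each monomial attains the minimum fit together as the coefficients vary.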
|
4 |
Convex Modeling Techniques for Aircraft Control. Kumar, Abhishek, 20 June 2000 (has links)
The need for a controller that schedules itself during the flight of an aircraft has been an active area of research, and new methods have been developed beyond the traditional gain-scheduling approach. One such design method leads to a linear parameter-varying (LPV) controller that changes based on the real-time variation of the system dynamics. Before such a controller can be designed, the system itself must be represented as an LPV system. The current effort proposes an LPV modeling technique inspired by the affine LPV modeling techniques found in recent research. The properties of the proposed modeling method are investigated and compared to the affine modeling technique, and it is shown that the proposed technique represents the actual system behavior more closely than the existing affine technique.
To study the effect of the two LPV modeling techniques on controller design, a linear quadratic regulator (LQR) controller using linear matrix inequality (LMI) formulation is designed. This control design method provides a measure of conservatism that is used to compare the controllers based on the different modeling techniques. An F-16 short-period model is used to implement the modeling techniques and design the controllers. It was found that the controller based on the proposed LPV modeling method is less conservative than the controller based on the existing LPV method. Interesting features of LMI formulation for multiple plant models were also discovered during the exercise.
A stability robustness analysis was also conducted as an additional comparison of the performance of the controllers designed using the two modeling methods. A scalar measure, called the probability of instability, is used as a measure of robustness. It was found that the controller based on the proposed modeling technique has the necessary robustness properties even though it is less conservative than the controller designed based on the existing modeling approach. / Master of Science
|
5 |
Shortest Paths, Network Design and Associated Polyhedra. Magnanti, Thomas L., Mirchandani, Prakash, 04 1900 (has links)
We study a specialized version of network design problems that arise in telecommunication, transportation and other industries. The problem, a generalization of the shortest path problem, is defined on an undirected network consisting of a set of arcs on which we can install (load), at a cost, a choice of up to three types of capacitated facilities. Our objective is to determine the configuration of facilities to load on each arc that will satisfy the demand of a single commodity at the lowest possible cost. Our results (i) demonstrate that the single-facility loading problem and certain "common breakeven point" versions of the two-facility and three-facility loading problems are polynomially solvable as a shortest path problem; (ii) show that versions of the two-facility loading problem are strongly NP-hard, but that a shortest path solution provides an asymptotically "good" heuristic; and (iii) characterize the optimal solution (that is, specify a linear programming formulation with integer solutions) of the common breakeven point versions of the two-facility and three-facility loading problems. In this development, we introduce two new families of facets, give geometric interpretations of our results, and demonstrate the usefulness of partitioning the space of the problem parameters to establish polyhedral integrality properties. Generalizations of our results apply to (i) multicommodity applications and (ii) situations with more than three facilities.
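For intuition on result (i), a hedged sketch of the single-facility reduction: with one facility type and a single demand pair, loading ceil(demand/capacity) copies of the facility along a path that is shortest with respect to the per-facility arc costs is a natural candidate solution, so the loading problem collapses to a shortest path computation. The instance data below is illustrative, not taken from the paper:

```python
import heapq
import math

def min_loading_cost(n, arcs, capacity, demand, s, t):
    """Single-facility loading for one commodity with demand between s and t:
    install units = ceil(demand/capacity) facilities on every arc of a
    shortest path under per-facility arc costs (nodes are 0..n-1)."""
    units = math.ceil(demand / capacity)          # facilities per chosen arc
    adj = [[] for _ in range(n)]
    for u, v, cost in arcs:                       # undirected network
        adj[u].append((v, cost))
        adj[v].append((u, cost))
    dist = [math.inf] * n                         # Dijkstra from s
    dist[s] = 0
    heap = [(0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v, c in adj[u]:
            if d + c < dist[v]:
                dist[v] = d + c
                heapq.heappush(heap, (d + c, v))
    return units * dist[t]                        # total loading cost

# toy network: demand 25 between nodes 0 and 2, facility capacity 10
cost = min_loading_cost(3, [(0, 1, 1), (1, 2, 1), (0, 2, 3)], 10, 25, 0, 2)
```

Here three facilities are needed per arc, and routing through the middle node (path cost 2) beats the direct arc (cost 3), for a total of 6.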
|
6 |
A global optimization approach to pooling problems in refineries. Pham, Viet, 15 May 2009 (has links)
The pooling problem is an important optimization problem encountered in the operation and scheduling of industrial processes within petroleum refineries. The key objective of pooling is to mix various intermediate products to achieve desired properties and quantities of products. First, intermediate streams from various processing units are mixed and stored in intermediate tanks referred to as pools. The stored streams in the pools are subsequently allowed to mix to meet varying market demands. While these pools enhance the operational flexibility of the process, they complicate the decision-making process needed for optimization. The problem of finding the least costly mixing recipe from intermediate streams to pools, and then from pools to sale products, is referred to as the pooling problem. The research objective is to contribute an approach to solving this problem.
The pooling problem can be formulated as an optimization program whose objective is to minimize cost or maximize profit while determining the optimal allocation of intermediate streams to pools and the blending of pools into final products. Because of the presence of bilinear terms, the resulting formulation is nonconvex, which makes it very difficult to attain the global solution. Consequently, there is a need for computationally efficient and easy-to-implement global-optimization techniques for the pooling problem. In this work, a new approach is introduced for the global optimization of pooling problems. The approach is based on three concepts: linearization by discretizing the nonlinear variables; pre-processing using implicit enumeration of the discretization to form a convex hull that limits the size of the search space; and application of integer cuts to ensure compatibility between the original problem and the discretized formulation. The continuous quality variables contributing to bilinear terms are discretized first. The discretized problem is a mixed-integer linear program (MILP) and can be solved globally in a computationally effective manner using a branch-and-bound method. The merits of the proposed approach are illustrated by solving test case studies from the literature and comparing with published results.
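The discretize-then-linearize idea can be illustrated on a toy one-pool instance (hypothetical data; the thesis solves the discretized model as an MILP by branch and bound, whereas this sketch enumerates the grid by brute force purely for clarity):

```python
import itertools

def pooling_by_discretization(sources, spec_max, demand, grid):
    """Toy one-pool pooling sketch. sources: list of (quality, unit_cost)
    for the streams feeding the pool. The pool quality
    q = sum(q_i * f_i) / sum(f_i) is what makes the exact model bilinear;
    restricting the flows f_i to a finite grid makes each candidate linear,
    so we enumerate the grid and keep the cheapest blend that meets the
    demand and the quality specification."""
    best = None
    for flows in itertools.product(grid, repeat=len(sources)):
        total = sum(flows)
        if total == 0 or total < demand:
            continue
        quality = sum(q * f for (q, _), f in zip(sources, flows)) / total
        if quality > spec_max:
            continue
        cost = sum(c * f for (_, c), f in zip(sources, flows))
        if best is None or cost < best[0]:
            best = (cost, flows, quality)
    return best

# cheap high-sulfur stream (q=3.0, $6/unit) vs. clean expensive one
# (q=1.0, $16/unit); blend must satisfy q <= 2.5 with at least 10 units
best = pooling_by_discretization([(3.0, 6.0), (1.0, 16.0)], 2.5, 10, range(21))
```

Refining the grid tightens the approximation; integer cuts (as in the thesis) are then used to reconcile the discretized optimum with the original continuous problem.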
|
7 |
Minimum Perimeter Convex Hull of a Set of Line Segments: An Approximation. Hassanzadeh, Farzad, 09 December 2008 (links)
The problem of finding the convex hull of a set of points in the plane is one of the fundamental and well-studied problems in computational geometry. However, for a set of imprecise points, the convex hull problem has not been thoroughly investigated. By an imprecise point, we mean a region in the plane inside which one point may lie. We are particularly interested in finding a minimum perimeter convex hull of a set of imprecise points, where the imprecise points are modelled as line segments. Currently, the best known algorithm that solves the minimum perimeter convex hull problem has an exponential running time in the worst case, and it is still unknown whether this problem is NP-hard. We explore several approximation algorithms for this problem. Finally, we propose a constant factor approximation algorithm that runs in O(n log n) time. / Thesis (Master, Computing) -- Queen's University, 2008-11-28 14:47:15.169
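For context on the O(n log n) bound, the standard exact hull algorithm at that complexity for precise points is Andrew's monotone chain; the sketch below (background only, not the thesis's approximation algorithm for segments) computes a hull and its perimeter:

```python
import math

def convex_hull(points):
    """Andrew's monotone chain: exact planar convex hull in O(n log n),
    returned as counter-clockwise vertices."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                      # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]     # drop duplicated endpoints

def perimeter(hull):
    """Total edge length of a polygon given as an ordered vertex list."""
    n = len(hull)
    return sum(math.dist(hull[i], hull[(i + 1) % n]) for i in range(n))
```

For segments, one simple feasible (but not optimal) hull is the hull of all 2n endpoints, since it contains every segment entirely; the approximation question is how much perimeter this or smarter choices waste against the optimum.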
|
8 |
Segmentação de sentenças manuscritas através de redes neurais artificiais / Segmentation of handwritten sentences using artificial neural networks. CARVALHO, César Augusto Mendonça de, 31 January 2008 (has links)
Previous issue date: 2008 / Automatic recognition of handwritten text is gaining importance by the day, both in the scientific community and commercially. Examples of applications include banking systems in which the amount fields of checks are validated, postal applications that read addresses and postal codes, and systems for indexing historical documents. Automatic segmentation of the text into words or characters is one of the first steps performed by handwriting recognition systems; good segmentation performance is therefore essential for the later stages to produce good recognition rates of the handwritten text. The present work addresses the problem of segmenting handwritten sentences into words through two approaches: (i) a method based on the Convex Hull distance metric, with modifications aimed at improving segmentation performance; (ii) a new method based on Artificial Neural Networks that seeks to overcome problems found in other segmentation techniques, such as the use of heuristics and vocabulary limitations. The performance of the segmentation methods was evaluated on a public handwritten-text database. The experimental results show that both approaches improved on the traditional approach based on the Convex Hull distance.
|
9 |
Analyse de complexité d'enveloppes convexes aléatoires / Complexity analysis of random convex hulls. Thomasse, Rémy, 18 December 2015 (has links)
In this thesis, we give some new results about the average size of convex hulls made of points chosen in a convex body. This size is known when the points are chosen uniformly (and independently) in a convex polytope or in a "smooth" enough convex body. This average size is also known if the points are independently chosen according to a centered Gaussian distribution.
In the first part of this thesis, we introduce a technique that gives new results when the points are chosen arbitrarily in a convex body and then noised by random perturbations. This kind of analysis, called smoothed analysis, was initially developed by Spielman and Teng in their study of the simplex algorithm. For an arbitrary set of points in a ball, we obtain lower and upper bounds for this smoothed complexity, in the case of uniform perturbation in a ball (in arbitrary dimension) and in the case of Gaussian perturbations in dimension 2. The asymptotic behavior of the expected size of the convex hull of uniformly random points in a convex body is polynomial for a "smooth" body and polylogarithmic for a polytope. In the second part, we construct a convex body such that the expected size of the convex hull of points uniformly chosen in that body oscillates between these two behaviors as the number of points increases. In the last part, we present an algorithm to generate efficiently a random convex hull made of points chosen uniformly and independently in a disk, and we compute its average time and space complexity. This algorithm can generate a random convex hull without explicitly generating all the points. It has been implemented in C++ and integrated in the CGAL library.
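A naive version of the disk experiment (deliberately unlike the thesis's algorithm, which avoids generating all the points) draws n uniform points in the unit disk by rejection sampling and counts the hull vertices; for the disk, the expected hull size grows like n^(1/3):

```python
import random

def _cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def hull_size(points):
    """Number of convex hull vertices via Andrew's monotone chain."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return len(pts)
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and _cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and _cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return len(lower) + len(upper) - 2

def disk_hull_size(n, seed=1):
    """Hull size of n points drawn uniformly in the unit disk, using
    rejection sampling from the bounding square."""
    rng = random.Random(seed)
    pts = []
    while len(pts) < n:
        x, y = rng.uniform(-1, 1), rng.uniform(-1, 1)
        if x * x + y * y <= 1.0:
            pts.append((x, y))
    return hull_size(pts)
```

The point of the thesis's CGAL algorithm is precisely that, since only ~n^(1/3) of the points end up on the hull, the hull can be generated far more cheaply than this brute-force baseline.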
|
10 |
On Resampling Schemes for Uniform Polytopes. Qi, Weinan, January 2017 (has links)
The convex hull of a sample is used to approximate the support of the underlying distribution. This approximation has many practical implications in real life; for example, many authors in environmental studies and medical research use it to approximate the boundary of a finite set. Asymptotic theory plays a crucial role in approximating the functionals of convex hulls, but unfortunately the asymptotic results are mostly very complicated. To address this complication, we suggest a consistent bootstrapping scheme for certain cases. Our resampling technique is used for both semi-parametric and non-parametric cases. Let X1, X2, ..., Xn be a sequence of i.i.d. random points uniformly distributed on an unknown convex set. Our bootstrapping scheme relies on resampling uniformly from the convex hull of X1, X2, ..., Xn. In this thesis, we study the asymptotic consistency of certain functionals of convex hulls. In particular, we apply our bootstrapping technique to the Hausdorff distance between the actual convex set and its estimator. We also provide a conjecture for the application of our bootstrapping scheme to Gaussian polytopes. Moreover, some other relevant consistency results for the regular bootstrap are developed.
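The core resampling step, drawing uniformly from the convex hull of the sample, can be sketched in the plane (a simplified 2-D illustration, assuming the hull is supplied as counter-clockwise vertices): fan-triangulate the hull, pick a triangle with probability proportional to its area, then sample uniformly inside it:

```python
import random

def resample_from_hull(hull, m, seed=0):
    """Draw m points uniformly from a convex polygon given as a list of
    counter-clockwise vertices, by area-weighted fan triangulation."""
    rng = random.Random(seed)
    a = hull[0]
    tris, areas = [], []
    for b, c in zip(hull[1:-1], hull[2:]):      # fan triangulation from a
        tris.append((a, b, c))
        areas.append(abs((b[0] - a[0]) * (c[1] - a[1])
                         - (b[1] - a[1]) * (c[0] - a[0])) / 2.0)
    total = sum(areas)
    out = []
    for _ in range(m):
        r = rng.uniform(0.0, total)             # area-weighted triangle choice
        i = 0
        while r > areas[i] and i < len(tris) - 1:
            r -= areas[i]
            i += 1
        (ax, ay), (bx, by), (cx, cy) = tris[i]
        u, v = rng.random(), rng.random()       # uniform point in the triangle
        if u + v > 1.0:
            u, v = 1.0 - u, 1.0 - v
        out.append((ax + u * (bx - ax) + v * (cx - ax),
                    ay + u * (by - ay) + v * (cy - ay)))
    return out
```

Iterating this on the hull of the observed sample gives the bootstrap replicates from which functionals such as the Hausdorff distance can be recomputed.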
|