281. Voter Compatibility In Interval Societies (Carlson, Rosalie J, 01 April 2013)
In an interval society, voters are represented by intervals on the real line, corresponding to their approval sets on a linear political spectrum. I imagine the society to be a representative democracy, and ask how to choose members of the society as representatives. Following work in mathematical psychology by Coombs and others, I develop a measure of the compatibility (political similarity) of two voters. I use this measure to determine the popularity of each voter as a candidate. I then establish local “agreeability” conditions and attempt to find a lower bound for the popularity of the best candidate. Other results about certain special societies are also obtained.
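As a toy illustration of the interval model, the sketch below scores candidates by an overlap-based compatibility; the specific measure and the helper names are assumptions made for this example, not the measure developed in the thesis.

```python
def compatibility(a, b):
    """Overlap of two voters' approval intervals, normalized by the shorter one.

    This particular formula is an assumption for illustration; the thesis
    develops its own compatibility measure following Coombs.
    """
    (a_lo, a_hi), (b_lo, b_hi) = a, b
    overlap = max(0.0, min(a_hi, b_hi) - max(a_lo, b_lo))
    return overlap / min(a_hi - a_lo, b_hi - b_lo)

def popularity(voters):
    """Average compatibility of each voter with everyone else in the society."""
    n = len(voters)
    return [sum(compatibility(voters[i], voters[j]) for j in range(n) if j != i) / (n - 1)
            for i in range(n)]

voters = [(0.0, 2.0), (1.0, 3.0), (2.5, 4.0)]   # approval sets on the political spectrum
scores = popularity(voters)
best = max(range(len(voters)), key=lambda i: scores[i])   # most popular candidate: index 1
```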
282. Geometric Computing over Uncertain Data (Zhang, Wuzhou, January 2015)
Entering the era of big data, human beings are faced with an unprecedented amount of geometric data today. Many computational challenges arise in processing this new deluge of geometric data. A critical one is data uncertainty: the data is inherently noisy and inaccurate, and often incomplete. The past few decades have witnessed the influence of geometric algorithms in various fields including GIS, spatial databases, and computer vision. Yet most existing geometric algorithms are built on the assumption that the data is precise and are incapable of properly handling data in the presence of uncertainty. This thesis explores a few algorithmic challenges in what we call geometric computing over uncertain data.

We study the nearest-neighbor searching problem, which returns the nearest neighbor of a query point in a set of points, in a probabilistic framework. This thesis investigates two different nearest-neighbor formulations: expected nearest neighbor (ENN), where we consider the expected distance between each input point and a query point, and probabilistic nearest neighbor (PNN), where we estimate the probability of each input point being the nearest neighbor of a query point.

For the ENN problem, we consider a probabilistic framework in which the location of each input point and/or query point is specified as a probability density function and the goal is to return the point that minimizes the expected distance. We present methods for computing an exact ENN or an ε-approximate ENN, for a given error parameter 0 < ε < 1, under different distance functions. These methods build an index of near-linear size and answer ENN queries in polylogarithmic or sublinear time, depending on the underlying function. As far as we know, these are the first nontrivial methods for answering exact or ε-approximate ENN queries with provable performance guarantees. Moreover, we extend our results to answer exact or ε-approximate k-ENN queries. Notably, when only the query points are uncertain, we obtain state-of-the-art results for top-k aggregate (group) nearest-neighbor queries in the L1 metric using the weighted SUM operator.

For the PNN problem, we consider a probabilistic framework in which the location of each input point is specified as a probability distribution function. We present efficient algorithms for (i) computing all points that are nearest neighbors of a query point with nonzero probability; (ii) estimating, within a specified additive error, the probability of a point being the nearest neighbor of a query point; and (iii) using this estimate to return the point that maximizes the probability of being the nearest neighbor, or all points whose probability of being the nearest neighbor exceeds some threshold. We also present experimental results to demonstrate the effectiveness of our approach.

We study the convex-hull problem, which asks for the smallest convex set that contains a given point set, in a probabilistic setting. In our framework, the uncertainty of each input point is described by a probability distribution over a finite number of possible locations, including a null location to account for the possible non-existence of the point. Our results include both exact and approximation algorithms for computing the probability of a query point lying inside the convex hull of the input, time-space tradeoffs for the membership queries, a connection between Tukey depth and membership queries, as well as a new notion of β-hull that may be a useful representation of uncertain hulls.

We study contour trees of terrains, which encode the topological changes of the level set of the height value ℓ as we raise ℓ from −∞ to +∞ on the terrain, in a probabilistic setting. We consider a terrain defined by linearly interpolating each triangle of a triangulation. In our framework, the uncertainty lies in the height of each vertex of the triangulation, and we assume that it is described by a probability distribution. We first show that the probability of a vertex being a critical point, and the expected number of nodes (resp. edges) of the contour tree, can be computed exactly and efficiently. Then we present efficient sampling-based methods for estimating, with high probability, (i) the probability that two points lie on an edge of the contour tree, within additive error; and (ii) the expected distance between two points p, q and the probability that the distance between p and q on the contour tree is at least ℓ, within additive and/or relative error, where the distance between p and q on a contour tree is defined as the difference between the maximum height and the minimum height on the unique path from p to q in the contour tree.
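For concreteness, a brute-force baseline for the ENN formulation with discretely uncertain points might look like the sketch below; the discrete-distribution representation and names are illustrative assumptions, and the thesis's contribution is precisely the indexes that avoid this linear scan.

```python
import numpy as np

def expected_nn(uncertain_points, query):
    """Brute-force expected nearest neighbor (ENN) for discretely uncertain points.

    Each uncertain point is a list of (location, probability) pairs and the query
    is a fixed point.  This naive scan over all locations is only a baseline for
    the problem described above; it does not reflect the near-linear-size indexes
    with polylogarithmic or sublinear query time developed in the thesis.
    """
    def expected_distance(point):
        return sum(p * np.linalg.norm(np.asarray(loc) - query) for loc, p in point)

    dists = [expected_distance(pt) for pt in uncertain_points]
    best = int(np.argmin(dists))
    return best, dists[best]

query = np.array([0.0, 0.0])
points = [
    [((1.0, 0.0), 0.5), ((3.0, 0.0), 0.5)],   # expected distance 2.0
    [((0.0, 2.5), 1.0)],                      # expected distance 2.5
]
idx, dist = expected_nn(points, query)        # -> idx == 0
```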
283. Parallel magnetic resonance imaging reconstruction problems using wavelet representations (Chaari, Lotfi, 05 November 2010)
To reduce scanning time or improve spatio-temporal resolution in some MRI applications, parallel MRI acquisition techniques with multiple receiver coils have emerged since the early 90's as powerful methods. In these techniques, MRI images have to be reconstructed from acquired undersampled k-space data. To this end, several reconstruction techniques have been proposed, such as the widely used SENSitivity Encoding (SENSE) method. However, the reconstructed images generally present artifacts due to the noise corrupting the observed data and to coil sensitivity profile estimation errors. In this work, we present novel SENSE-based reconstruction methods which perform regularization in the complex wavelet domain so as to promote the sparsity of the solution. These methods achieve accurate image reconstruction under degraded experimental conditions, in which neither the SENSE method nor standard regularized methods (e.g. Tikhonov) give convincing results. The proposed approaches rely on fast parallel optimization algorithms dealing with convex but non-differentiable criteria involving suitable sparsity-promoting priors. Moreover, in contrast with most of the available reconstruction methods, which proceed slice by slice, one of the proposed methods allows 4D (3D + time) reconstruction exploiting spatial and temporal correlations. The hyperparameter estimation problem inherent to the regularization process has also been addressed from a Bayesian viewpoint using MCMC techniques. Experiments on real anatomical and functional data show that the proposed methods reduce reconstruction artifacts and improve the statistical sensitivity/specificity in functional MRI.
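The "convex but non-differentiable criteria involving sparsity-promoting priors" mentioned above are of the generic form data fidelity plus an ℓ1 penalty. A minimal sketch of that family follows, assuming a plain dense operator and canonical-basis sparsity; the thesis itself works with the SENSE encoding operator, wavelet frames, and parallel proximal algorithms, none of which appear here.

```python
import numpy as np

def ista(A, y, lam, n_iter=200):
    """Iterative soft-thresholding for  min_x 0.5*||A x - y||^2 + lam*||x||_1.

    A generic sketch of the kind of convex, non-smooth criterion mentioned in
    the abstract (quadratic data fidelity plus a sparsity-promoting prior).
    """
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)           # gradient of the data-fidelity term
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 128))
x_true = np.zeros(128)
x_true[rng.choice(128, 5, replace=False)] = 1.0
y = A @ x_true + 0.01 * rng.standard_normal(64)
x_hat = ista(A, y, lam=0.1)                # sparse estimate of x_true
```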
284. Dynamical system decomposition and analysis using convex optimization (Anderson, James David, January 2012)
This thesis is concerned with investigating new methods for the analysis of large-scale dynamical systems using convex optimization. The proposed methodology is based on composite Lyapunov theory and is computationally implemented using polynomial programming techniques. The main result of this work is the development of a system decomposition framework that makes it possible to analyze systems of a scale with which traditional methods cannot cope. We begin by addressing the problem of model invalidation. A barrier certificate method for invalidating models in the presence of uncertain data is presented for both continuous- and discrete-time models. It is shown how a re-parameterization of the time-dependent variables can improve the numerical conditioning of the underlying optimization problem. The main contribution of this thesis is the development of an automated dynamical system decomposition framework that permits us to verify the stability of systems whose state dimension is typically large enough to render traditional computational methods intractable. The underlying idea is to decompose a system into a set of lower-order subsystems connected in feedback, in such a manner that composite methods for stability verification may be employed. What is unique about the algorithm presented is that it takes into account both the dynamics and the topology of the interconnection graph. In the first instance we illustrate the methodology with an ecological network and a primal Internet congestion control scheme. The versatility of the decomposition framework is also highlighted when applied to a model of the EGF-MAPK signaling pathway, where it identifies biologically relevant subsystems in addition to verifying stability. Finally we introduce stability metrics for interconnected dynamical systems based on the theory of dissipativity. We conclude by outlining a clustering-based decomposition algorithm that explicitly takes into account the input and output dynamics when determining the system decomposition.
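As a point of reference for the composite Lyapunov machinery, the linear special case reduces to solving a Lyapunov equation; the sketch below illustrates only that case (the thesis works with nonlinear interconnections and sum-of-squares programming, which this does not attempt).

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def lyapunov_certificate(A, Q=None):
    """Quadratic Lyapunov certificate for a linear subsystem dx/dt = A x.

    Solves A' P + P A = -Q and checks P > 0, which certifies stability via
    V(x) = x' P x.  This is only the linear analogue of the composite,
    polynomial-programming certificates used in the thesis.
    """
    n = A.shape[0]
    Q = np.eye(n) if Q is None else Q
    P = solve_continuous_lyapunov(A.T, -Q)
    stable = np.all(np.linalg.eigvalsh((P + P.T) / 2) > 0)   # P > 0  <=>  A is Hurwitz
    return stable, P

A = np.array([[-1.0, 2.0],
              [ 0.0, -3.0]])
stable, P = lyapunov_certificate(A)    # stable == True for this Hurwitz matrix
```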
285. Convexity-Preserving Scattered Data Interpolation (Leung, Nim Keung, 12 1900)
Surface fitting methods play an important role in many scientific fields as well as in computer aided geometric design. The problem treated here is that of constructing a smooth surface that interpolates data values associated with scattered nodes in the plane. The data is said to be convex if there exists a convex interpolant. The problem of convexity-preserving interpolation is to determine if the data is convex, and construct a convex interpolant if it exists.
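One way to make the decision problem concrete: a standard characterization (stated here as background, not quoted from the thesis) is that the data is convex exactly when supporting gradients exist at every node, which is a linear feasibility problem. A minimal sketch with SciPy, assuming that characterization:

```python
import numpy as np
from scipy.optimize import linprog

def data_is_convex(points, values):
    """Check whether scattered data (x_i, z_i) admits a convex interpolant.

    Feasibility of subgradients g_i with z_j >= z_i + g_i . (x_j - x_i) for all
    i, j is necessary and sufficient: the max of the supporting affine functions
    is then a convex interpolant.  Solved here as a phase-one style LP.
    """
    pts = np.asarray(points, dtype=float)
    z = np.asarray(values, dtype=float)
    n, d = pts.shape
    rows, rhs = [], []
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            # Constraint: g_i . (x_j - x_i) <= z_j - z_i
            row = np.zeros(n * d)
            row[i * d:(i + 1) * d] = pts[j] - pts[i]
            rows.append(row)
            rhs.append(z[j] - z[i])
    res = linprog(c=np.zeros(n * d), A_ub=np.vstack(rows), b_ub=np.array(rhs),
                  bounds=[(None, None)] * (n * d), method="highs")
    return res.status == 0   # feasible => a convex interpolant exists

pts = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (1.0, 1.0)]
print(data_is_convex(pts, [0.0, 0.2, 1.0, 1.0]))   # True
print(data_is_convex(pts, [0.0, 1.0, 0.0, 1.0]))   # False: middle point sits above the chord
```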
286. Computational and Statistical Advances in Testing and Learning (Ramdas, Aaditya Kumar, 01 July 2015)
This thesis makes fundamental computational and statistical advances in testing and estimation, making critical progress in the theory and application of classical statistical methods like classification, regression and hypothesis testing, and in understanding the relationships between them. Our work connects multiple fields in often counter-intuitive and surprising ways, leading to new theory, new algorithms, and new insights, and ultimately to a cross-fertilization of varied fields like optimization, statistics and machine learning. The first of three thrusts has to do with active learning, a form of sequential learning from feedback-driven queries that often has a provable statistical advantage over passive learning. We unify concepts from two seemingly different areas, active learning and stochastic first-order optimization. We use this unified view to develop new lower bounds for stochastic optimization using tools from active learning and new algorithms for active learning using ideas from optimization. We also study the effect of feature noise, or errors-in-variables, on the ability to actively learn. The second thrust deals with the development and analysis of new convex optimization algorithms for classification and regression problems. We provide geometrical and convex analytical insights into the role of the margin in margin-based classification, and develop new greedy primal-dual algorithms for non-linear classification. We also develop a unified proof for convergence rates of randomized algorithms for the ordinary least squares and ridge regression problems in a variety of settings, with the purpose of investigating which algorithm should be utilized in different settings. Lastly, we develop fast, state-of-the-art, numerically stable algorithms for an important univariate regression problem called trend filtering, with a wide variety of practical extensions. The last thrust involves a series of practical and theoretical advances in nonparametric hypothesis testing. We show that a smoothed Wasserstein distance allows us to connect many vast families of univariate and multivariate two-sample tests. We clearly demonstrate the decreasing power of the families of kernel-based and distance-based two-sample tests and independence tests with increasing dimensionality, challenging existing folklore that they work well in high dimensions. Surprisingly, we show that these tests are automatically adaptive to simple alternatives and achieve the same power as other direct tests for detecting mean differences. We discover a computation-statistics tradeoff, where computationally more expensive two-sample tests have a provable statistical advantage over cheaper tests. We also demonstrate the practical advantage of using Stein shrinkage for kernel independence testing at small sample sizes. Lastly, we develop a novel algorithmic scheme for performing sequential multivariate nonparametric hypothesis testing using the martingale law of the iterated logarithm to near-optimally control both type-1 and type-2 errors. One perspective connecting everything in this thesis involves the closely related and fundamental problems of linear regression and classification. Every contribution in this thesis, from active learning to optimization algorithms, to the role of the margin, to nonparametric testing, fits in this picture. An underlying theme that repeats itself in this thesis is the computational and/or statistical advantages of sequential schemes with feedback.
This arises in our work through comparing active with passive learning, through iterative algorithms for solving linear systems instead of direct matrix inversions, and through comparing the power of sequential and batch hypothesis tests.
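As one concrete member of the kernel-based family discussed above, the unbiased Gaussian-kernel MMD statistic can be estimated as in the sketch below; this is a textbook estimator, not code from the thesis, and the bandwidth and data are arbitrary.

```python
import numpy as np

def mmd_unbiased(X, Y, bandwidth=1.0):
    """Unbiased estimate of the squared maximum mean discrepancy (MMD) with a
    Gaussian kernel, the statistic behind kernel two-sample tests whose
    high-dimensional power is analyzed in the thesis."""
    def kernel(A, B):
        sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-sq / (2 * bandwidth**2))

    Kxx, Kyy, Kxy = kernel(X, X), kernel(Y, Y), kernel(X, Y)
    n, m = len(X), len(Y)
    term_x = (Kxx.sum() - np.trace(Kxx)) / (n * (n - 1))   # drop diagonal terms
    term_y = (Kyy.sum() - np.trace(Kyy)) / (m * (m - 1))
    return term_x + term_y - 2 * Kxy.mean()

rng = np.random.default_rng(0)
d = 50                                         # test power drops as d grows
X = rng.standard_normal((200, d))
Y = rng.standard_normal((200, d)) + 0.2        # mean-shifted alternative
stat = mmd_unbiased(X, Y)                      # compare against a permutation null
```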
287. Extremal Polyominoes (Steffanová, Veronika, January 2015)
Title: Extremal Polyominoes Author: Veronika Steffanová Department: Department of Applied Mathematics Supervisor: Doc. RNDr. Pavel Valtr, Dr. Abstract: The thesis is focused on polyominoes and other planar figures consisting of regular polygons, namely polyiamonds and polyhexes. We study their basic geometrical properties: the perimeter, the convex hull and the bounding rectangle/hexagon. We maximise and minimise these parameters for a fixed size of the polyomino, denoted by n. We compute the extremal values of a chosen parameter and then try to enumerate all polyominoes of size n which have the extremal property. Some of the problems were solved by other authors; we summarise their results. Some of the problems were solved by us, namely the maximal bounding rectangle/hexagon and the maximal convex hull of polyiamonds. Several topics remain open. We summarise the literature and offer our observations for future researchers. Keywords: Polyomino, convex hull, extremal questions, plane
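A small utility in the spirit of the parameters studied here, computing the perimeter, bounding rectangle and convex-hull area of a polyomino given by its unit cells (written for this summary, not taken from the thesis):

```python
from itertools import product

def convex_hull(pts):
    """Andrew's monotone chain on lexicographically sorted points."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    def half(points):
        chain = []
        for p in points:
            while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()
            chain.append(p)
        return chain
    lower, upper = half(pts), half(pts[::-1])
    return lower[:-1] + upper[:-1]

def polyomino_stats(cells):
    """Perimeter, bounding rectangle and convex-hull area of a polyomino."""
    cells = set(map(tuple, cells))
    # Each unit-cell edge not shared with a neighbouring cell contributes 1 to the perimeter.
    perim = sum(1 for (x, y) in cells
                for (dx, dy) in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if (x + dx, y + dy) not in cells)
    xs = [x for x, _ in cells]
    ys = [y for _, y in cells]
    bounding = (max(xs) - min(xs) + 1, max(ys) - min(ys) + 1)
    # Convex hull of all cell corners; area via the shoelace formula.
    corners = {(x + dx, y + dy) for (x, y) in cells for dx, dy in product((0, 1), repeat=2)}
    hull = convex_hull(sorted(corners))
    area = 0.5 * abs(sum(x1 * y2 - x2 * y1
                         for (x1, y1), (x2, y2) in zip(hull, hull[1:] + hull[:1])))
    return perim, bounding, area

# The L-tromino: perimeter 8, bounding rectangle 2 x 2, convex-hull area 3.5.
print(polyomino_stats([(0, 0), (1, 0), (0, 1)]))
```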
288. Random closed sets and particle processes (Stroganov, Vladimír, January 2014)
In this thesis we are concerned with the representation of random closed sets in Rd whose values are concentrated on a space UX of locally finite unions of sets from a given class X ⊂ F. We examine the existence of representations by particle processes on the same space X that preserve invariance under the rigid motions to which the initial random set was invariant. We discuss the existence of such representations for selected, practically applicable spaces X: we go through the known results for convex sets and introduce new proofs for the cases of sets with positive reach and of smooth k-dimensional submanifolds. Besides that, we present a series of general results related to the representation of random UX sets.
289. Geometric and Functional Inequalities (Lehec, Joseph, 03 December 2008)
This thesis is mostly about the Blaschke-Santaló inequality, which states that among symmetric sets, the Euclidean ball maximises the product vol(K) vol(K°), where K° is the polar body of K. Several authors (Ball, Artstein, Klartag, Milman, Fradelizi, Meyer...) were able to derive functional inequalities from this inequality. The purpose of this thesis is to give direct proofs of these functional Santaló inequalities. This provides new proofs of the Santaló inequality, some of which are very simple. The last chapter is about Gaussian chaoses: we obtain a sharp bound on the moments of Gaussian chaoses due to Latała, using the generic chaining of Talagrand.
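For reference, the set inequality and the even-case functional form that the cited authors derived can be written as follows; the functional statement below is the standard one from the literature, not a formula reproduced from the thesis itself.

```latex
% Blaschke-Santaló: for a symmetric convex body K in R^n with polar body
% K° = { y : <x, y> <= 1 for all x in K },
\[
  \operatorname{vol}(K)\,\operatorname{vol}(K^{\circ})
  \;\le\; \operatorname{vol}(B_2^n)^2 .
\]
% Functional version (even case): for even f with Legendre transform
% f^*(y) = sup_x ( <x, y> - f(x) ),
\[
  \int_{\mathbb{R}^n} e^{-f(x)}\,dx \int_{\mathbb{R}^n} e^{-f^{*}(y)}\,dy
  \;\le\; (2\pi)^{n},
\]
% with equality when f(x) = |x|^2 / 2.
```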
290. Convex optimization for cosegmentation (Joulin, Armand, 17 December 2012)
People and most animals have a natural ability to see the world and understand it effortlessly. The apparent simplicity of this task suggests that this ability is, to some extent, mechanical, i.e., it does not require high-level thinking or profound reasoning. This observation suggests that this visual perception of the world should be reproducible on a mechanical device such as a computer. Computer vision is the field of research dedicated to creating a form of visual perception on computers. The first work on computer vision dates from the 50's, but the amount of computing power needed for treating and analyzing visual data was not available at the time. It is only recently that improvements in computer power and storage capacity have permitted this field to really emerge. On the one hand, constant progress in computer vision has allowed the development of dedicated solutions to practical or industrial problems: detecting human faces, tracking people in crowded areas, or spotting defects in production chains are industrial applications where computer vision is used. On the other hand, when it comes to creating a general visual perception for computers, it is probably fair to say that less progress has been made, and the community is still struggling with fundamental problems. One of these problems is to reproduce our ability to group the visual input data recorded by an optical device into meaningful regions. This procedure, called segmentation, separates a scene into meaningful entities (e.g., objects or actions). Segmentation seems not only natural but essential for people to fully understand a given scene, yet it is still very challenging for a computer. One reason is the difficulty of clearly identifying what "meaningful" should be: depending on the scene or the situation, a region may have different interpretations. In this thesis, we focus on the segmentation task and try to avoid this fundamental difficulty by considering segmentation as a weakly supervised learning problem. Instead of segmenting images according to some predefined definition of "meaningful" regions, we develop methods to segment multiple images jointly into entities that repeatedly appear across the set of images. In other words, we define "meaningful" regions from a statistical point of view: they are regions that appear frequently in a dataset, and we design procedures to discover them. This leads us to design models whose scope goes beyond this application to vision. Our approach takes its roots in the field of machine learning, whose goal is to design efficient methods to retrieve and/or learn common patterns in data. The field of machine learning has gained in popularity in the last decades due to the recent improvement in computer power and the ever-growing size of available databases. In this thesis, we focus on methods tailored to retrieving hidden information from poorly annotated data, i.e., data with incomplete or partial annotations. In particular, given a specific segmentation task defined by a set of images, we aim at segmenting the images and learning a related model so as to segment unannotated images. Finally, our research drives us to explore the field of numerical optimization so as to design algorithms especially tailored to our problems. In particular, many numerical problems considered in this thesis cannot be solved by off-the-shelf software because of the complexity of their formulation. We use and adapt recently developed tools to relax complex combinatorial problems into convex problems for which the optimal solution can be found. We illustrate the promise of our formulations and algorithms on general applications in fields other than computer vision; in particular, we show that our work may also be used in text classification and in the discovery of cell configurations.
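To make the relaxation idea concrete in miniature, the sketch below relaxes binary labels of a graph-cut style objective to the box [0, 1] with a couple of seed nodes as weak supervision; the tiny graph, function names and seeding are assumptions for this illustration and are far simpler than the discriminative-clustering relaxation developed in the thesis.

```python
import numpy as np

def relaxed_segmentation(W, fg_seeds, bg_seeds):
    """Relax-then-round sketch of graph-based binary segmentation.

    Binary labels y in {0,1}^n for the cut objective y' L y (L = graph Laplacian)
    are relaxed to the box [0,1]^n; with a few seed nodes fixed, the relaxation
    becomes a convex quadratic problem whose minimizer solves a linear system.
    """
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W
    y = np.zeros(n)
    y[list(fg_seeds)] = 1.0
    seeds = sorted(set(fg_seeds) | set(bg_seeds))
    free = [i for i in range(n) if i not in seeds]
    # Minimize y' L y over the free coordinates: L_ff y_f = -L_fs y_s.
    y[free] = np.linalg.solve(L[np.ix_(free, free)], -L[np.ix_(free, seeds)] @ y[seeds])
    return (y >= 0.5).astype(int), y        # rounded labels and relaxed scores

# A 4-node chain graph: nodes 0-1 strongly tied, 2-3 strongly tied, weak 1-2 link.
W = np.array([[0.0, 1.0, 0.0, 0.0],
              [1.0, 0.0, 0.1, 0.0],
              [0.0, 0.1, 0.0, 1.0],
              [0.0, 0.0, 1.0, 0.0]])
labels, scores = relaxed_segmentation(W, fg_seeds=[0], bg_seeds=[3])
# labels -> [1, 1, 0, 0]: the weak edge is the one that gets cut.
```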