31 |
Two-Dimensional Convex Hull Algorithm Using the Iterative Orthant Scan / Tvådimensionell Convex Hull Algoritm Med Iterativ Orthant Scan. Freimanis, Davis; Hongisto, Jonas. January 2017.
Finding the minimal convex polygon containing a set of points in the plane is of great interest in computer graphics and data science, and doing it efficiently is economically desirable. This thesis explores how a novel variant of bucketing, called the Iterative orthant scan, can be applied to this problem. An algorithm implementing the Iterative orthant scan was created in C++, and its real-time and asymptotic performance was compared to the industry-standard algorithms and to the traditional textbook convex hull algorithm. The worst-case time complexity was shown to be a logarithmic factor worse than the best theoretical algorithm. The real-time performance was 47.3% better than the traditional algorithm and 66.7% worse than the industry standard for a uniform square distribution; 61.1% better than the traditional algorithm and 73.4% worse than the industry standard for a uniform triangular distribution; and, for a circular distribution, 89.6% worse than the traditional algorithm and 98.7% worse than the industry-standard algorithm. The asymptotic performance improved for some of the distributions studied. Parallelization of the algorithm led to an average performance improvement of 35.9% across the tested distributions. Although the created algorithm performed worse than the industry standard, it performed better than the traditional algorithm in most cases and shows promise for parallelization.
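The abstract does not name the "traditional textbook" algorithm, and the Iterative orthant scan itself is not reproduced here; as a point of reference, a typical textbook O(n log n) planar hull method is Andrew's monotone chain, sketched below in Python (illustrative only, not the thesis's C++ code).

```python
# Andrew's monotone chain: a textbook O(n log n) planar convex hull.
# Illustrative only -- not the thesis's Iterative orthant scan.

def cross(o, a, b):
    """z-component of (a - o) x (b - o); > 0 means a counter-clockwise turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Return hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:                      # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]     # endpoints are shared, drop duplicates

print(convex_hull([(0, 0), (1, 1), (2, 2), (2, 0), (0, 2), (1, 0.5)]))
# -> [(0, 0), (2, 0), (2, 2), (0, 2)]
```

The sort dominates the running time, which is where bucketing-style approaches such as the orthant scan can gain on favourable input distributions.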
|
32 |
ON ALGORITHMS FOR THE COLOURFUL LINEAR PROGRAMMING FEASIBILITY PROBLEM. Rong, Guohong. 10 1900.
Given colourful sets S_1,..., S_{d+1} of points in R^d and a point p in R^d, the colourful linear programming problem is to express p as a convex combination of points x_1,...,x_{d+1} with x_i in S_i for each i. This problem was presented by Bárány and Onn in 1997, and it is still not known whether a polynomial-time algorithm for it exists. The monochrome version of this problem, expressing p as a convex combination of points in a single set S, is a traditional linear programming feasibility problem. The colourful Carathéodory theorem, due to Bárány in 1982, provides a sufficient condition for the existence of a colourful set of points containing p in its convex hull. Bárány's result was generalized by Holmsen et al. in 2008 and by Arocha et al. in 2009, before being further generalized by Meunier and Deza. We study algorithms for colourful linear programming under the conditions of Bárány and their generalizations. In particular, we implement the Meunier-Deza algorithm and enhance previously used random case generators. Computational benchmarking and a performance analysis are presented, including a comparison between the two algorithms of Bárány and Onn, the algorithm of Meunier and Deza, and random picking. / Master of Science (MSc)
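The monochrome feasibility problem mentioned above reduces directly to a linear program: find lambda >= 0 with sum_i lambda_i = 1 and sum_i lambda_i x_i = p. A minimal Python sketch (scipy assumed as tooling; this is not the thesis's implementation):

```python
import numpy as np
from scipy.optimize import linprog

def in_convex_hull(p, S):
    """Monochrome feasibility: is p a convex combination of the rows of S?

    Solves: find lambda >= 0 with S^T lambda = p and sum(lambda) = 1
    (a pure feasibility LP, so the objective is zero).
    """
    S = np.asarray(S, dtype=float)
    p = np.asarray(p, dtype=float)
    n = S.shape[0]
    A_eq = np.vstack([S.T, np.ones(n)])        # d equality rows + convexity row
    b_eq = np.append(p, 1.0)
    res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * n, method="highs")
    return res.success

# p = origin, S = vertices of a triangle containing it
print(in_convex_hull([0, 0], [[2, 0], [-1, 1], [-1, -1]]))  # True
```

The colourful version additionally requires picking exactly one point per colour class, so naive enumeration would solve one such LP per colourful candidate simplex; the Bárány-Onn and Meunier-Deza algorithms are designed to avoid that blow-up.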
|
33 |
Extended Target Tracking of Convex Polytope Shapes with Maneuvers and Clutter. Mannari, Prabhanjan. January 2024.
High-resolution sensors such as automotive radar and LiDAR have become prevalent in target tracking applications. Data from such sensors demands extended target tracking, in which the shape of the target is estimated along with its kinematics. Several applications benefit from extended target tracking, for example autonomous vehicles and robotics.
This thesis proposes a different approach to extended target tracking from the existing literature. Instead of a single shape descriptor describing the entire target shape, different parts of the extended target are treated as distinct targets constrained by the target's rigid-body shape. This formulation can handle issues such as self-occlusion and clutter, which are not sufficiently addressed in the literature. First, a framework for extended target tracking is developed based on the proposed formulation. Using the 2D convex hull as a shape descriptor, an algorithm to track 2D convex polytope shaped targets is developed. Further, the point-target Probabilistic Multiple Hypotheses Tracker (PMHT) is modified to derive extended target PMHT (ET-PMHT) equations for tracking 3D convex polytope shapes, using a Delaunay triangulation to describe the shape. Finally, the approach is extended to handle target maneuvers as well as clutter and measurements from the interior of the target.
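As a rough geometric illustration (assumed scipy tooling, not the ET-PMHT itself), both descriptors mentioned above, a convex hull over point measurements and a Delaunay triangulation, are readily computed from a raw point cloud:

```python
import numpy as np
from scipy.spatial import ConvexHull, Delaunay

rng = np.random.default_rng(0)
pts = rng.normal(size=(200, 3))        # stand-in for 3D target measurements

hull = ConvexHull(pts)                 # boundary-only shape descriptor:
faces = hull.simplices                 # triangular faces (vertex index triples)
print(len(hull.vertices), "hull vertices,", len(faces), "faces")

tri = Delaunay(pts)                    # triangulation of the same cloud;
inside = tri.find_simplex(np.array([[0.0, 0.0, 0.0]]))[0] >= 0
print("origin inside target volume:", inside)   # point-in-polytope test
```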
In all three cases, the issue of self-occlusion is considered, and the algorithms are still able to capture the target shape effectively. Since the true target center may not be observable, the shape descriptor abandons the use of the target center in the state, and the shape is described by its boundary alone. The shape descriptors also support addition and deletion of faces, which is useful for handling newly visible parts of the target and clutter, respectively. The proposed algorithms have been compared with the existing literature for various scenarios, and they outperform it, especially in the presence of self-occlusion. / Thesis / Candidate in Philosophy
|
34 |
Différentes propriétés de marches aléatoires avec contraintes géométriques et dynamiques / Different properties of random walks under geometric and dynamic constraints. Chupeau, Marie. 05 July 2016.
We first determine the impact of an infinite reflecting wall on the space occupied by a planar Brownian motion at a fixed observation time. We characterize it by the mean perimeter of its convex hull, defined as the minimal convex polygon enclosing the whole trajectory. We also determine the mean length of the visited portion of the wall, and the survival probability of a Brownian walker in an absorbing wedge. We then study the time needed for a lattice random walker to visit every site of a confined volume, or a fraction of them. We calculate the mean value of this so-called cover time in one dimension for a persistent random walk. We also determine the distribution of the cover time and related observables for the class of non-compact processes, which describes a wide range of random searches. After that, we calculate and analyze the splitting probability of a one-dimensional Brownian walker evolving in an expanding or contracting interval. Finally, we study several aspects of the starving random walk model, in which the walker starves if its visits to new sites, from which it collects resources, are not regular enough. We develop a mean-field treatment of this model in two dimensions, then determine the impact of resource regeneration on the survival properties of the walker. We finally consider a model of exploitation of food patches that explicitly takes into account the displacement of the walker within the patches, and which can be mapped onto the starving random walk model.
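For intuition about the first quantity, here is a Monte Carlo sketch in Python (assumed numpy/scipy stack, not the thesis's analytical approach): simulate free planar Brownian motion and average the hull perimeter, whose exact mean for a free walk of duration t is classically sqrt(8*pi*t). The reflecting wall studied in the thesis can be emulated by taking the absolute value of one coordinate of the path.

```python
import numpy as np
from scipy.spatial import ConvexHull

def mean_hull_perimeter(n_walks=2000, n_steps=1000, t=1.0, seed=1):
    """Monte Carlo estimate of E[perimeter of convex hull] for planar
    Brownian motion run up to time t (no reflecting wall)."""
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    total = 0.0
    for _ in range(n_walks):
        steps = rng.normal(scale=np.sqrt(dt), size=(n_steps, 2))
        path = np.vstack([[0.0, 0.0], np.cumsum(steps, axis=0)])
        total += ConvexHull(path).area   # in 2D, .area is the perimeter
    return total / n_walks

print(mean_hull_perimeter())   # ~ sqrt(8*pi) ≈ 5.01 for t = 1
                               # (slightly below, due to discretization)
```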
|
35 |
Kvantitativní slabá kompaktnost / Quantitative weak compactness. Rolínek, Michal. January 2012.
In this thesis we study quantitative weak compactness in the spaces (C(K), τ_p) and later in Banach spaces. In the first chapter we introduce several quantities which, in different manners, measure the τ_p-noncompactness of a given uniformly bounded set H ⊂ R^K. We apply the results in Banach spaces in chapter 2, where we prove (among others) a quantitative version of the Eberlein-Šmulyan theorem. In the third chapter we focus on convex closures and how they affect measures of noncompactness, and we prove a quantitative version of the Krein-Šmulyan theorem. The first three chapters show that measuring noncompactness is intimately related to measuring distances from function spaces. We follow this idea in chapters 4 and 5, where we measure distances from Baire-one functions, first in R^K and later also in Banach spaces.
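The thesis's precise quantities are not reproduced in this listing; the following is a hedged LaTeX sketch of the standard framework recalled from the literature on quantitative Krein-Šmulyan theorems (the exact definitions and constants used in the thesis may differ):

```latex
% One standard way to quantify weak noncompactness of a bounded set H
% in a Banach space X, viewed inside (X^{**}, w^*), uses the
% non-symmetric distance between sets:
\[
  \widehat{d}(A, B) \;=\; \sup_{a \in A} \, \inf_{b \in B} \|a - b\|,
  \qquad
  k(H) \;=\; \widehat{d}\!\left( \overline{H}^{\,w^*}, X \right),
\]
% so k(H) = 0 iff H is relatively weakly compact. A quantitative
% Krein--Smulyan theorem then bounds the effect of taking convex closures:
\[
  k\!\left( \overline{\operatorname{conv}}\, H \right) \;\le\; 2\, k(H),
\]
% recovering the classical Krein--Smulyan theorem when k(H) = 0.
```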
|
36 |
Anomalous Diffusion in Ecology. Lukovic, Mirko. 06 February 2014.
No description available.
|
37 |
Teoremas de semiespaço para superfícies mínimas [Half-space theorems for minimal surfaces]. Silva, Sylvia Ferreira da. 20 March 2017.
In this work we detail the results presented by David A. Hoffman and William H. Meeks in "The strong half-space theorem for minimal surfaces". The first results are half-space theorems for minimal surfaces in R^3, which have since been generalized to other ambient spaces, as done by Daniel, B. / Hauswirth, L., and by Daniel, B. / Meeks, W. H. III. The third and last result characterizes convex hulls in n-dimensional Euclidean space. / Conselho Nacional de Pesquisa e Desenvolvimento Científico e Tecnológico - CNPq
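For reference, the two Hoffman-Meeks half-space results discussed above are classically stated as follows (a paraphrase recalled from the literature, not the thesis's wording):

```latex
% Half-space theorem (Hoffman--Meeks, 1990): a properly immersed minimal
% surface \Sigma in R^3 contained in a half-space must be a plane:
\[
  \Sigma \subset \{\, x \in \mathbb{R}^3 : \langle x, v \rangle \le c \,\}
  \quad\Longrightarrow\quad \Sigma \text{ is a plane.}
\]
% Strong half-space theorem: two disjoint, properly immersed minimal
% surfaces in \mathbb{R}^3 are necessarily parallel planes.
```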
|
38 |
Wavelet-based segmentation and convex hull approaches for quantitative analysis of biological imaging data. Wu, Tzu-Ching. 14 January 2021.
Imaging-based analysis of developmental processes is crucial to understanding the mechanisms controlling plant and animal development. In vertebrate embryos such as the zebrafish embryo, nuclei segmentation plays an important role in detecting and quantifying nuclei over space and time; however, limitations of image quality and of the segmentation methods may affect segmentation performance. In plants, including studies of Arabidopsis epidermis growth, cellular shape change dictates organ size control and growth behavior, and quantitative image analysis of dynamic cell patterning is needed to link cause and effect between cells and organs. Here we provide a series of new quantitative biological imaging methods and tools, including a wavelet-based segmentation method for zebrafish embryo development studies and a convex hull approach for quantitative shape analysis of lobed plant cells.
Identification of individual cells in tissues, organs, and various developing systems is a well-studied problem, because it is an essential part of objectively analyzing quantitative images in numerous biological contexts. Here we present a size-dependent, wavelet-based segmentation method that provides robust segmentation without any preprocessing, filtering, or fine-tuning steps and is robust to the signal-to-noise ratio (SNR). The program separates overlapping nuclei, identifies cell cycle states, and minimizes intensity attenuation in object identification. The wavelet-based method achieves robust segmentation results with respect to true positive rate, precision, and segmentation accuracy compared with other commonly used methods. We applied the segmentation program to in toto quantification of zebrafish embryonic development and developed an automatic interactive image analysis platform named WaveletSEG that integrates nuclei segmentation, image registration, and nuclei shape analysis. Additional functions we developed, including a 3D ground-truth annotation tool, a synthetic image generator, an export tool for segmented training datasets, and data visualization interfaces, are also incorporated in WaveletSEG for further data analysis and validation.
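WaveletSEG itself is not reproduced here; the sketch below only illustrates the generic idea behind wavelet-based segmentation (suppress fine-scale noise in the wavelet domain, then threshold), using PyWavelets as an assumed dependency. The size-dependent machinery of the actual method is not shown.

```python
import numpy as np
import pywt

def wavelet_segment(image, wavelet="db2", level=3, k=3.0):
    """Generic sketch: denoise by shrinking small detail coefficients,
    then threshold to a binary mask. Not the WaveletSEG algorithm."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    approx, details = coeffs[0], coeffs[1:]
    new_details = []
    for (cH, cV, cD) in details:
        t = k * np.median(np.abs(cD)) / 0.6745      # robust noise estimate
        new_details.append(tuple(pywt.threshold(c, t, mode="soft")
                                 for c in (cH, cV, cD)))
    denoised = pywt.waverec2([approx] + new_details, wavelet)
    denoised = denoised[:image.shape[0], :image.shape[1]]  # crop padding
    return denoised > denoised.mean()               # crude global threshold

# toy image: two bright blobs ("nuclei") on a noisy background
img = np.zeros((64, 64))
img[10:20, 10:20] = 1.0
img[40:52, 35:47] = 1.0
img += np.random.default_rng(0).normal(scale=0.2, size=img.shape)
mask = wavelet_segment(img)
print(mask.sum(), "foreground pixels")
```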
In addition to our work in zebrafish, we developed image analysis tools for quantitative cell-to-organ studies in plants. Given the importance of the epidermis, and of this particular cell type, for leaf expansion, there is a strong need to understand how pavement cells morph from a simple polyhedral shape into highly lobed and interdigitated cells. It is still unclear how and when patterns of lobing are initiated in pavement cells, and one major technological bottleneck is the lack of a robust, objective methodology to identify and track lobing events during the transition from simple cell geometry to lobed cells. We developed a convex-hull-based algorithm termed LobeFinder to identify lobes, quantify geometric properties, and create a useful graphical output for further analysis. The algorithm is validated against manually curated images of pavement cells of widely varying sizes and shapes. The ability to objectively count and detect new lobe initiation events provides an improved quantitative framework to analyze mutant phenotypes, detect symmetry-breaking events in time-lapse image data, and quantify the time-dependent correlation between cell shape change and intracellular factors that may play a role in the morphogenesis process.
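LobeFinder's validated logic is not reproduced here; the toy sketch below only illustrates the convex-hull intuition it builds on: lobe tips lie on or near the hull of the cell contour, while the gaps between lobes deviate from it. The depth threshold and the nearest-hull-vertex distance proxy are illustrative choices, not the published algorithm's.

```python
import numpy as np
from scipy.spatial import ConvexHull

def count_indentations(contour, depth_thresh=2.0):
    """Toy sketch: walk a closed cell contour and count runs of points
    that lie farther than depth_thresh from the convex hull vertices.
    Illustrative only -- not the published LobeFinder."""
    contour = np.asarray(contour, dtype=float)
    hull = ConvexHull(contour)
    hv = contour[hull.vertices]
    # distance of each contour point to the nearest hull vertex
    # (a crude proxy for distance to the hull boundary)
    d = np.min(np.linalg.norm(contour[:, None, :] - hv[None, :, :], axis=2),
               axis=1)
    indent = d > depth_thresh            # points inside an indentation
    # circular run count: one rising edge per indentation between lobes
    return int(np.sum(indent & ~np.roll(indent, 1)))

# star-shaped "cell" with 4 arms and 4 gaps between them
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
star = np.c_[np.cos(theta), np.sin(theta)] * (5 + 2 * np.cos(4 * theta))
print(count_indentations(star))   # expect ~4, one per gap between arms
```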
|
39 |
Development of virtual reality tools for arthroscopic surgery training. Yaacoub, Fadi. 12 November 2008.
Arthroscopic surgery is currently undergoing significant growth, to the benefit of a large number of patients. The technique nevertheless has a number of drawbacks, and surgeons must therefore train and rehearse their gestures in order to perform this type of operation effectively and safely. Traditional methods of surgical training are based on cadaver dissection and practice on animals; with the evolution of our society, both practices are increasingly criticized and subject to very restrictive regulations. To reach a higher level of skill, surgeons need new means of training. Recently, virtual reality has come to be used more and more in medicine, and especially in surgery. Surgical simulators have become one of the most active topics in virtual reality research, as well as a valid training method and tool for surgeons and medical students alike. In this work, a virtual reality simulator for teaching arthroscopic surgery, in particular wrist surgery, is presented. Two main issues are addressed: 3D reconstruction and 3D interaction. A sequence of CT images was processed to generate a 3D model of the wrist. The two main components of the system interface are illustrated: 3D interaction to guide the surgical instruments, and the user interface for force feedback. In this context, we present algorithms that model the objects using convex hull approaches and that simulate collision detection between the virtual objects in real time. In addition, a force-feedback device is used as a haptic interface to the system. This leads to the development of a low-cost system with the same advantages as professional devices. In this respect, wrist arthroscopy can be simulated, and medical students can easily use the system and learn the required basic skills safely, flexibly, and at lower cost.
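The simulator's collision code is not available in this listing; as a hedged illustration of real-time collision detection between convex shapes, here is the classical separating axis test for two convex polygons (the 2D analogue of testing convex hull models of instrument and anatomy against each other):

```python
import numpy as np

def project(poly, axis):
    """Project polygon vertices onto axis; return (min, max) interval."""
    dots = poly @ axis
    return dots.min(), dots.max()

def convex_polygons_collide(A, B):
    """Separating axis test for two convex polygons (2D).
    Returns True iff no edge normal of A or B separates them."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    for poly in (A, B):
        n = len(poly)
        for i in range(n):
            edge = poly[(i + 1) % n] - poly[i]
            axis = np.array([-edge[1], edge[0]])   # edge normal
            a_lo, a_hi = project(A, axis)
            b_lo, b_hi = project(B, axis)
            if a_hi < b_lo or b_hi < a_lo:         # gap on this axis
                return False                       # -> no collision
    return True

tri = [(0, 0), (2, 0), (1, 2)]
sq_far = [(3, 3), (4, 3), (4, 4), (3, 4)]
sq_near = [(1, 1), (3, 1), (3, 3), (1, 3)]
print(convex_polygons_collide(tri, sq_far))   # False
print(convex_polygons_collide(tri, sq_near))  # True
```

Convexity is what makes the test sound, which is one reason convex hull models are popular in real-time haptic simulation.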
|
40 |
Introduction et analyse des schémas de cotation en avance de phase / Introduction and analysis of tolerancing schemes during the first design stages. Socoliuc, Michel. 09 July 2010.
Some time ago I read that, by our current standards, the Roman bridges of antiquity could be considered inefficient: they used too much stone, and a great deal of work went into their construction. Over the years, to address similar problems, we have learned to use fewer materials and reduce the workload. The same issues arise in mechanical design, where we continuously try to offer more efficient systems that must be designed in less time, be cheaper to produce, and provide performance at least equivalent to what has already been designed. During a conventional design process, designers define ideal geometries and, given that machining processes cannot produce mechanical parts without geometrical defects, specify the associated tolerancing schemes. These schemes define the acceptable geometrical deviations that guarantee a well-functioning system. Unfortunately, this is done after the detailed drawings have been produced, and thus too late. To address this problem, I first introduce the integration, in the early design stages, of an optimized validation process based on a Digital Mock-Up directly linked to its functional representation (Functional Mock-Up), enabling the validation of standardized 3D tolerancing schemes. I describe what is meant by "Functional Mock-Up" (FMU) and what this definition adds to the Digital Mock-Up (DMU). Once that is done, I detail the links that guarantee the uniqueness of information within the working environment, as well as the processes that connect the functional and digital representations. Next, I detail the processes built on these concepts, whose goal is to validate the tolerancing choices made in the early design phases. I begin with the worst-case analysis (using, in particular, the deviation-domain model), which guarantees the proper functioning of the mechanical system when all deviations remain inside their respective zones (defined by the tolerances). Finally, I introduce what a statistical layer, coupled with the worst-case analysis using convex hulls, can bring in an industrial context, particularly under time constraints.
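As a hedged, generic illustration of what a statistical layer adds to a pure worst-case analysis (a one-dimensional stack-up with made-up tolerance values, not the deviation-domain or convex-hull machinery of the thesis):

```python
import numpy as np

# Generic 1D tolerance stack-up: a gap built from four part deviations,
# each specified as a symmetric tolerance +/- t_i around nominal.
tols = np.array([0.10, 0.05, 0.08, 0.12])   # illustrative values

worst_case = tols.sum()                     # all deviations conspire: 0.35
rss = np.sqrt((tols ** 2).sum())            # root-sum-square estimate

# Monte Carlo "statistical layer": deviations rarely all hit their limits
rng = np.random.default_rng(0)
dev = rng.uniform(-tols, tols, size=(100_000, len(tols))).sum(axis=1)
print(f"worst case : {worst_case:.3f}")
print(f"RSS        : {rss:.3f}")
print(f"99th pct   : {np.quantile(np.abs(dev), 0.99):.3f}")  # below worst case
```

The worst-case bound guarantees function for every admissible combination of deviations, while the statistical view quantifies how improbable the extreme combinations are, which is what allows tolerances to be relaxed at acceptable risk.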
|