61

Merging meshes using dynamic regular triangulation / Combinação de malhas utilizando triangulações regulares dinâmicas

Silva, Luis Fernando Maia Santos January 2010
Simplicial meshes are used in many fields of Computer Graphics and Engineering, for instance in visualization, simulation, and prototyping, among other applications. This kind of mesh is often used as a discrete approximation of a continuous space, offering a flexible and efficient representation. Considerable effort is spent on generating good-quality meshes, but in some applications the meshes are modified over time. Such operations are often expensive and inflexible, sometimes leading to results very different from the original meshes; indeed, the ability to handle dynamic scenes is one of the most challenging problems in computer graphics. This work proposes an alternative technique for updating simplicial meshes that undergo geometric and topological changes. It exploits the property that a Weighted Delaunay Triangulation (WDT) can be used to implicitly define the connectivity of a mesh: instead of explicitly maintaining connectivity information, the approach simply keeps a collection of weights associated with each vertex. The method consists of an algorithm that computes a WDT from any given triangulation, relying on a breadth-first traversal to assign weights to vertices and on a subdivision strategy to ensure that the reconstructed triangulation conforms to the original one. This technique has many applications; in particular, it allows for a very simple method of merging triangulations, which is illustrated with both 2D and 3D examples.
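
To make the implicit-connectivity idea concrete, here is a minimal sketch (an illustration under assumed conventions, not code from the thesis) of the weighted in-circle predicate underlying 2D regular (weighted Delaunay) triangulations. Each vertex (x, y) with weight w is lifted to (x, y, x^2 + y^2 - w); a triangle abc is invalidated by a vertex d exactly when d's lifted point lies below the plane through the lifted a, b, c:

    # Each vertex is ((x, y), w); abc is assumed counter-clockwise.
    def lift(v):
        (x, y), w = v
        return (x, y, x * x + y * y - w)

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    def conflicts(a, b, c, d):
        # True iff d has negative power distance to the orthocircle of
        # triangle abc, i.e. abc cannot be a face of the regular
        # triangulation of {a, b, c, d}.
        la, lb, lc, ld = lift(a), lift(b), lift(c), lift(d)
        rows = [[la[i] - ld[i] for i in range(3)],
                [lb[i] - ld[i] for i in range(3)],
                [lc[i] - ld[i] for i in range(3)]]
        return det3(rows) > 0

Raising a vertex's weight lowers its lifted point, letting it "capture" more of the triangulation; assigning suitable weights to every vertex is what lets a regular triangulation encode a given mesh's connectivity implicitly.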
62

Clustering de trajetórias / Trajectory clustering

Marcio Takashi Iura Oshiro 16 September 2015
This work studies kinetic clustering problems, i.e., clustering problems in which the objects move. The study focuses on the one-dimensional case, where the objects are points moving on the real line. Several variants of this case are discussed. Regarding the movement, we consider the case where each point moves at a constant velocity over a given time interval, the case where the points move arbitrarily and we only know their positions at discrete time instants, the case where the points move at a random velocity of which only the expected value is known, and the case where, given a partition of the time interval, the points move at a constant velocity within each subinterval. Regarding the kind of clustering sought, we focus on the case where the number of clusters is part of the input, and we consider different measures of clustering quality. Two of them are traditional measures for clustering problems: the sum of the cluster diameters and the maximum diameter of a cluster. The third measure takes into account the kinetic nature of the problem and allows the clustering to change over time in a controlled manner. For each variant of the problem, we present exact or approximation algorithms, some complexity results, and open questions.
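
For reference, the static (single-instant) version of the sum-of-diameters objective admits a simple exact dynamic program, since on the real line an optimal clustering uses contiguous runs of the sorted points; the sketch below is a generic textbook routine built on that observation, not code from the thesis:

    def sum_of_diameters(points, k):
        # Partition points on the line into at most k contiguous
        # clusters minimizing the sum of cluster diameters, in O(n^2 k).
        xs = sorted(points)
        n = len(xs)
        INF = float("inf")
        # dp[j][i]: best cost of splitting the first i points into j clusters
        dp = [[INF] * (n + 1) for _ in range(k + 1)]
        dp[0][0] = 0.0
        for j in range(1, k + 1):
            for i in range(1, n + 1):
                for t in range(j - 1, i):        # last cluster is xs[t..i-1]
                    cand = dp[j - 1][t] + (xs[i - 1] - xs[t])
                    if cand < dp[j][i]:
                        dp[j][i] = cand
        return min(dp[j][n] for j in range(1, k + 1))

    # e.g. sum_of_diameters([0.0, 1.0, 1.5, 9.0, 9.2], 2) == 1.7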
63

A Computational Geometry Approach to Digital Image Contour Extraction

Tejada, Pedro J. 01 May 2009
We present a method for extracting contours from digital images, using techniques from computational geometry. Our approach is different from traditional pixel-based methods in image processing. Instead of working directly with pixels, we extract a set of oriented feature points from the input digital images, then apply classical geometric techniques, such as clustering, linking, and simplification, to find contours among these points. Experiments on synthetic and natural images show that our method can effectively extract contours, even from images with considerable noise; moreover, the extracted contours have a very compact representation.
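
As one concrete instance of the kind of simplification step mentioned above, the classical Douglas-Peucker routine reduces a polyline to a sparse subset within a tolerance eps (a standard textbook algorithm, not code from this work):

    import math

    def douglas_peucker(pts, eps):
        # pts: polyline as a list of (x, y); eps: distance tolerance
        if len(pts) < 3:
            return list(pts)
        (x1, y1), (x2, y2) = pts[0], pts[-1]

        def dist(p):
            # distance from p to the chord pts[0] -> pts[-1]
            px, py = p
            num = abs((y2 - y1) * px - (x2 - x1) * py + x2 * y1 - y2 * x1)
            den = math.hypot(x2 - x1, y2 - y1)
            return num / den if den else math.hypot(px - x1, py - y1)

        idx = max(range(1, len(pts) - 1), key=lambda i: dist(pts[i]))
        if dist(pts[idx]) <= eps:
            return [pts[0], pts[-1]]
        left = douglas_peucker(pts[:idx + 1], eps)
        right = douglas_peucker(pts[idx:], eps)
        return left[:-1] + right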
64

The use of geometric structures in graphics and optimization / L'utilisation des structures géométriques pour synthèse d'image et optimisation

Bus, Norbert 07 October 2015
Real-world data has a large geometric component and exhibits significant geometric patterns. Using the geometric nature of data to design efficient methods has become an important topic in several scientific fields, e.g., computational geometry, discrete geometry, computer graphics, and computer vision. In this thesis we use geometric structures to design efficient algorithms for problems in two domains: computer graphics and combinatorial optimization. Part I focuses on a geometric data structure called the well-separated pair decomposition and its use for one of the most challenging problems in computer graphics, namely efficient photo-realistic rendering. One solution is the family of many-lights methods, which approximate global illumination by individually computing illumination from a large number of virtual point lights (VPLs) placed on surfaces. Considering each VPL individually results in a vast number of calculations. One successful strategy to reduce computations is to group the VPLs into a small number of clusters that are treated as individual lights with respect to each point to be shaded. We use the well-separated pair decomposition of points as the basis for a data structure that pre-computes and compactly stores a set of view-independent candidate VPL clusterings, and we show that a suitable clustering of the VPLs can be efficiently extracted from this data structure. We show that instead of clustering points and/or VPLs independently, what is required is to cluster the product space of the set of points to be shaded and the set of VPLs, based on the induced pairwise illumination. Additionally, we propose an adaptive sampling technique to reduce the number of visibility queries for each product-space cluster. Our method handles any light source that can be approximated with VPLs as well as highly glossy materials, and it outperforms previous state-of-the-art methods. Part II focuses on developing new approximation algorithms for a fundamental NP-complete problem in computational geometry, namely the minimum hitting set problem, with particular focus on the case where, given a set of points and a set of disks, we wish to compute the minimum-sized subset of the points that hits all disks. It turns out that efficient algorithms for geometric hitting set rely on a key geometric structure called an epsilon-net. We give an algorithm that uses only Delaunay triangulations to construct epsilon-nets of size 13.4/epsilon, and we provide a practical implementation of a technique that computes hitting sets in near-linear time using small-sized epsilon-nets. Our results yield a 13.4-approximation for the hitting set problem with an algorithm that runs efficiently even on large data sets. For smaller data sets, we present an implementation of the local search technique along with tight bounds on its approximation factor, yielding an (8 + epsilon)-approximation algorithm with running time O(n^{2.34}).
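
For contrast with the epsilon-net machinery above, a hedged baseline: the folklore greedy algorithm for the same points-and-disks hitting set instance, which only guarantees an O(log n) factor and is unrelated to the Delaunay-based construction described in the thesis:

    import math

    def greedy_hitting_set(points, disks):
        # points: list of (x, y); disks: list of (cx, cy, r)
        def hits(p, d):
            return math.dist(p, d[:2]) <= d[2]

        uncovered = set(range(len(disks)))
        chosen = []
        while uncovered:
            # pick the point hitting the most still-unhit disks
            best = max(points,
                       key=lambda p: sum(1 for i in uncovered
                                         if hits(p, disks[i])))
            hit_now = {i for i in uncovered if hits(best, disks[i])}
            if not hit_now:
                raise ValueError("some disk contains no input point")
            chosen.append(best)
            uncovered -= hit_now
        return chosen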
65

Multi-covering problems and their variants

Bhowmick, Santanu 01 May 2017
In combinatorial optimization, covering problems are those where, given a ground set and a family of subsets of the ground set, the objective is to find a minimum-cost collection of subsets whose union contains the ground set. We consider covering problems in the context of Computational Geometry, a subfield of Computer Science that deals with problems associated with geometric objects such as points, lines, disks, and polygons. In particular, geometric covering is an active research area, where the ground set and the family of subsets are induced by geometric objects. Covering problems in combinatorial optimization often have a naturally arising geometric analogue, and though such problems remain NP-hard, it is often possible to exploit the geometric properties of the set system to obtain better approximation algorithms. In this work, the fundamental problem that we consider is a generalization of geometric covering in which each element of the ground set may need to be covered by more than one subset. To be precise, the problem is defined as follows: given two sets of points X, Y and a coverage function κ : X → Z+ ∪ {0}, construct balls centered on the points in Y such that each point x ∈ X is covered by at least κ(x) distinct balls. The objective is to minimize the total cost, which is a function of the radii of the balls. This problem is termed the metric multi-cover (MMC) problem. We first consider the version of the MMC problem where κ(x) = 1 for all clients, i.e., the 1-covering case. The known constant-factor approximations for this problem are variations of an LP-based primal-dual algorithm. We use a modified local search technique, motivated by a geometric idea, to derive a simple constant-factor approximation for this problem in Chapter 2. We then look at the MMC problem where the point sets X, Y are in the Euclidean plane and each client x ∈ X needs to be covered by at least κ(x) distinct disks centered on the points in Y. In Chapter 4, we give the first polynomial-time constant-factor approximation for this problem in which the constant is independent of the coverage function κ. Our solution also has an incremental property, which allows the algorithm to handle increases in the coverage requirement by increasing the radii of the current server disks, without affecting the approximation factor. In the next problem, we move from the Euclidean plane to arbitrary metric spaces, where we consider the uniform MMC problem. In this problem, each client x has demand κ(x) = k, where k > 0 is an integer. We give the first constant-factor approximation (independent of k) for this problem. The key contribution that led to this result is a partitioning scheme for the servers that reduces the uniform MMC problem to k instances of the 1-covering problem while preserving the optimality of the solution up to a constant multiplicative factor. We present the partitioning scheme as an independent result in Chapter 5, which we then use to solve the uniform MMC problem in Chapter 6. The MMC problem with an arbitrary coverage function κ is considered in Chapter 7. The key challenge in the non-uniform version is that the symmetry of the server partitioning scheme breaks down, as the coverage demands of the clients are independent of each other. We present a constant-factor algorithm for this problem in Chapter 7.

The last problem that we consider is the t-MMC problem, a restricted version of the uniform MMC problem. The objective is to compute a cover in which each client is covered by at least k distinct server disks, using at most t server disks in total. This problem is a generalization of the clustering problem (where k = 1), and to our knowledge this is the first time this generalization has been considered. We give a constant-factor approximation for this problem in Chapter 8.
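
To make the MMC objective concrete, here is a brute-force exact solver for tiny instances (an illustration of the problem statement only; the thesis develops constant-factor approximations, not exhaustive search). It uses the observation that some optimal solution assigns each server a radius equal to one of its client distances or zero:

    import itertools, math

    def mmc_brute_force(X, Y, kappa):
        # X, Y: lists of 2D points; kappa[i]: demand of client X[i].
        # Returns one radius per server in Y minimizing the total of the
        # radii such that each X[i] lies in >= kappa[i] server balls.
        dists = [[math.dist(x, y) for x in X] for y in Y]
        best, best_radii = float("inf"), None
        choices = [sorted(set(row)) + [0.0] for row in dists]
        for radii in itertools.product(*choices):   # exponential in |Y|
            if sum(radii) >= best:
                continue
            ok = all(sum(1 for j in range(len(Y))
                         if dists[j][i] <= radii[j]) >= kappa[i]
                     for i in range(len(X)))
            if ok:
                best, best_radii = sum(radii), radii
        return best_radii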
66

Clusters and covers: geometric set cover algorithms

Gibson, Matthew Richard 01 May 2010
The set cover problem is a well-studied problem in computer science. The problem cannot be approximated to better than a log n factor in polynomial time unless P = NP, and it has an O(log n)-factor approximation algorithm. We consider several special cases of the set cover problem in which geometry plays a key role. With geometric structure introduced to the problem, it may be possible to construct approximation algorithms with approximation ratios asymptotically better than log n. The first problem we consider is the decomposing coverings problem. Here, we consider a combinatorial problem: given a collection of points in the plane and a collection of objects in the plane such that each point is contained in at least k objects, partition the objects into as many sets as possible so that each set covers all of the points. We show that if the objects are translates of a convex polygon, then it is possible to partition the translates into Ω(k) covers. The second problem we consider is the planar sensor cover problem, a generalization of the decomposing coverings problem. We are given a collection of points in the plane and a collection of objects in the plane. Each object can be thought of as a sensor, and each sensor has a duration, which can be thought of as its battery life. The planar sensor cover problem is to schedule a start time for each sensor so that the points are covered by a sensor for as long as possible. We give a constant-factor approximation for this problem. The key contribution to this result is a constant-factor approximation to a one-dimensional version of the problem called the restricted strip cover (RSC) problem. Our result for RSC improves upon the previous best O(log log log n)-approximation, and our result for the planar sensor cover problem improves upon the previous best O(log n)-approximation. The next problem we consider is metric clustering to minimize the sum of radii. Here, we are given an n-point metric (P, d) and an integer k > 0, and we are interested in covering the points in P with at most k balls so that the sum of the radii of the balls is minimized. We give a randomized algorithm which solves the problem exactly in n^{O(log n log Δ)} time, where Δ is the ratio of the maximum interpoint distance to the minimum interpoint distance. We also show that the problem is NP-hard, even in metrics induced by weighted planar graphs and in metrics of constant doubling dimension. The last problem we consider is the minimum dominating set problem for disk graphs. In this problem, we are given a set of disks in the plane, and we want to choose a minimum-cardinality subset of disks such that every disk is either in the subset or intersects a disk in the subset. For any ε > 0, we show that a simple local search algorithm is a (1 + ε)-approximation for the problem, which improves upon the previous best O(log n)-approximation algorithm.
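
The local search scheme for dominating set admits a compact sketch. The version below is an illustration under assumed conventions, with a fixed swap size for brevity; the proven (1 + ε) guarantee requires swaps whose size grows as ε shrinks:

    import itertools, math

    def intersects(d1, d2):
        # disks are (x, y, r); a disk also "intersects" itself
        (x1, y1, r1), (x2, y2, r2) = d1, d2
        return math.hypot(x1 - x2, y1 - y2) <= r1 + r2

    def dominates(subset, disks):
        # every input disk must touch (or be) some disk in the subset
        return all(any(intersects(d, s) for s in subset) for d in disks)

    def local_search_mds(disks, swap_size=2):
        sol = list(disks)                     # trivially dominating start
        improved = True
        while improved:
            improved = False
            for out in itertools.combinations(range(len(sol)), swap_size):
                kept = [s for i, s in enumerate(sol) if i not in out]
                pool = [d for d in disks if d not in kept]
                for incoming in itertools.combinations(pool, swap_size - 1):
                    cand = kept + list(incoming)
                    if dominates(cand, disks):
                        sol = cand            # net size drops by one
                        improved = True
                        break
                if improved:
                    break
        return sol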
67

Geometric Facility Location Problems on Uncertain Data

Zhang, Jingru 01 August 2017
Facility location, an important topic in computer science and operations research, is concerned with placing facilities to "serve" demand points (each representing a customer) so as to minimize the service cost. In the real world, data is often associated with uncertainty because of measurement inaccuracy, sampling discrepancy, outdated data sources, resource limitations, etc. Hence, problems on uncertain data have attracted much attention. In this dissertation, we mainly study a classical facility location problem, the k-center problem, and several of its variations, on uncertain points each of which has multiple locations that follow a probability density function (pdf). We develop efficient algorithms for solving these problems. Since these problems more or less have a geometric flavor, computational geometry techniques are utilized to help develop the algorithms. In particular, we first study the k-center problem on uncertain points on a line, which aims to find k centers on the line minimizing the maximum expected distance from the uncertain points to their expected closest centers. We develop efficient algorithms for both the continuous case, where the location of every uncertain point follows a continuous piecewise-uniform pdf, and the discrete case, where each uncertain point has multiple discrete locations each associated with a probability. The time complexities of our algorithms are nearly linear and match those for the same problem on deterministic points. Then, we consider the one-center problem (i.e., k = 1) on a tree, where each uncertain point has multiple locations in the tree and we want to compute a center in the tree minimizing the maximum expected distance from it to all uncertain points. We solve the problem in linear time by proposing a new algorithmic scheme, called refined prune-and-search. Next, we consider the one-dimensional one-center problem on uncertain points with continuous pdfs, and the one-center problem in the plane under the rectilinear metric for uncertain points with discrete locations. We solve both problems in linear time, again by using the refined prune-and-search technique. In addition, we study the k-center problem on uncertain points in a tree. We present an efficient algorithm for the problem by proposing a new tree decomposition and developing several data structures; the tree decomposition and these data structures may be interesting in their own right. Finally, we consider the line-constrained k-center problem on deterministic points in the plane, where the centers are required to lie on a given line. Several distance metrics, including L1, L2, and L∞, are considered. We also study the line-constrained k-median and k-means problems in the plane. These problems have been studied before; based on geometric observations, we design new algorithms that improve upon the previous work. The algorithms and techniques developed in this dissertation may find other applications as well, in particular in solving related problems on uncertain data.
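
The structure of the one-dimensional objective is easy to see in code: each uncertain point contributes a convex piecewise-linear expected distance, so their maximum is convex. The sketch below (an illustration only, not the thesis's linear-time prune-and-search) minimizes it by ternary search over discrete uncertain points given as (location, probability) pairs:

    def max_expected_distance(c, uncertain_points):
        # each uncertain point is a list of (location, probability) pairs
        return max(sum(p * abs(c - x) for x, p in pts)
                   for pts in uncertain_points)

    def one_center_1d(uncertain_points, iters=100):
        lo = min(x for pts in uncertain_points for x, _ in pts)
        hi = max(x for pts in uncertain_points for x, _ in pts)
        for _ in range(iters):              # shrink bracket around the minimum
            m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
            if (max_expected_distance(m1, uncertain_points)
                    < max_expected_distance(m2, uncertain_points)):
                hi = m2
            else:
                lo = m1
        return (lo + hi) / 2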
68

Mathematical modelling of granulation processes : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Mathematical Physics at Massey University, Palmerston North, New Zealand

Rynhart, Patrick Reuben January 2004
Granulation is an industrial process where fine particles are bound together into larger granules. The process has numerous applications including the manufacture of pharmaceuticals and the production of cosmetics, chemicals, detergents and fertilisers. This thesis studies aspects of wet granulation, which involves the application of a viscous binder, usually in the form of a spray, to an agitated bed of powder particles. Individual powder particles may adhere together, joined by small quantities of binder fluid called liquid bridges. By a process of collision and adherence, additional particles may join the newly formed agglomerates. Agglomerates may also coalesce together, a process that leads to granule formation. On the completion of this process, granules are typically dried.

This thesis studies wet granulation on three different levels. First, micro-level investigations of liquid bridges between two and three particles are performed. For the two-particle case, the fluid profile of static (stationary) and dynamic (moving) liquid bridges is investigated. For the static case, a numerical solution to the Young-Laplace equation is obtained; this relates the volume of binder fluid to liquid bridge properties such as the inter-particle force. An analytic solution is also obtained, providing the liquid bridge profile in terms of known mathematical functions. For both solutions, the radii of the (spherical) primary particles may be different. The dynamic case is then studied using the Navier-Stokes equations with the low Reynolds number approximation. The motion of the approaching particles is shown to be damped by the viscosity of the liquid bridge. Static liquid bridges between three equally sized primary particles are then studied. Symmetry of the problem is used to obtain a numerical solution to the Young-Laplace equation. Liquid bridge properties are calculated in terms of the binder fluid volume. Experimental agreement is provided.

Secondly, a model to estimate the stickiness (fractional wet surface area) of agglomerates is proposed. Primary particles are approximated as spheres and are added one at a time in a closely packed arrangement. The model includes parameters to control the inter-particle separation distance and the fluid saturation state. Computational geometry is used to obtain results which relate the number of particles and the volume of binder fluid to the stickiness of the agglomerates.

Finally, a population balance model for wet granulation is developed by extending an earlier model to incorporate the effects of binder fluid. Functions for the inter-particle collision rate and drying rate are proposed, including functions which are derived from the geometric model, described above, for the case of maximum particle consolidation. The model is solved numerically for a range of coalescence kernels, and results are presented which show the effect of binder volume and the drying rate.
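
For reference, the static bridge profile solved numerically in the first part satisfies the axisymmetric Young-Laplace equation; in one standard textbook parametrization (the notation below is assumed, not taken from the thesis, and the signs depend on the chosen orientation of the interface):

    \Delta p \;=\; \gamma \left( \frac{1}{r_1} + \frac{1}{r_2} \right)
             \;=\; \gamma \left( \frac{1}{y \, (1 + y'^2)^{1/2}}
                               - \frac{y''}{(1 + y'^2)^{3/2}} \right),

where y(x) is the bridge profile between the particles, r_1 and r_2 are the principal radii of curvature, \gamma is the surface tension of the binder, and \Delta p is the constant pressure difference across the interface. As a sanity check, a cylindrical bridge y = R gives \Delta p = \gamma / R.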
69

Geometric assistance for satellite layout / Aide géométrique à l'aménagement de satellites

De Lange, Eelco 19 February 1998
This thesis addresses satellite layout problems. The goal is to develop efficient and robust algorithms leading to simple tools that assist the design-office engineer in repetitive layout tasks. We propose an optimal algorithm that computes a planar cross-section of the Minkowski sum of two convex polyhedra, and an efficient divide-and-conquer algorithm that computes the union of a set of polygons. We have carefully analyzed the numerical precision required for the correct operation of these two algorithms, as well as the handling of the geometric degeneracies that may arise. We designed the GEOTOOLS software for placing a sequence of pieces of equipment, in particular antennas, which have a field of view. GEOTOOLS supports the interactive placement of an object in a partial layout while visualizing the constraints imposed by that layout (the admissible space). The second part of this thesis reports experiments with GEOTOOLS on realistic satellite models, placing sequences of antennas both interactively and automatically.
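
As a 2D illustration of the geometric kernel involved (the thesis computes a planar cross-section of the Minkowski sum of convex polyhedra; this simpler, standard routine only conveys the idea), the Minkowski sum of two convex polygons can be computed in O(n + m) by merging their edge vectors in angular order:

    def minkowski_sum(P, Q):
        # P, Q: convex polygons as counter-clockwise vertex lists [(x, y), ...]
        def reorder(poly):
            # start at the bottom-most (then left-most) vertex
            i = min(range(len(poly)), key=lambda k: (poly[k][1], poly[k][0]))
            return poly[i:] + poly[:i]
        P, Q = reorder(P), reorder(Q)
        n, m = len(P), len(Q)
        res, i, j = [], 0, 0
        while i < n or j < m:
            res.append((P[i % n][0] + Q[j % m][0],
                        P[i % n][1] + Q[j % m][1]))
            ep = (P[(i + 1) % n][0] - P[i % n][0],
                  P[(i + 1) % n][1] - P[i % n][1])
            eq = (Q[(j + 1) % m][0] - Q[j % m][0],
                  Q[(j + 1) % m][1] - Q[j % m][1])
            cross = ep[0] * eq[1] - ep[1] * eq[0]
            if cross >= 0 and i < n:        # P's edge turns first
                i += 1
            if cross <= 0 and j < m:        # Q's edge turns first
                j += 1
        return res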
70

Adaptive geometric algorithms / Algorithmes géométriques adaptatifs

Nielsen, Frank 27 September 1996
The work carried out in this thesis concerns the construction of so-called adaptive geometric algorithms, in the sense that their running time adapts to the solution being built. We first describe the main paradigms for obtaining adaptive algorithms. We then propose a quasi-optimal adaptive algorithm for computing the convex hull of planar objects such that the complexity of the convex hull of any pair of objects is bounded. The algorithm is based on a composite approach combining the marriage-before-conquest paradigm with a bucketing methodology. We also consider computing the upper envelope of functions and the partial convex decomposition of a point set. Finally, we studied piercing problems for sets of objects, which have been shown to be NP-hard. We first studied the case of isothetic boxes, giving an adaptive heuristic whose precision is itself adaptive. We then studied the combinatorial properties of convex objects with respect to piercing, obtaining a collection of algorithms for various classes of objects, some of which prove the existence of Helly-type theorems.
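
The simplest textbook example of an algorithm that is adaptive in this output-sensitive sense is Jarvis's gift wrapping, whose O(nh) running time depends on the hull size h; the sketch below (assuming points in general position, and in no way reproducing the thesis's marriage-before-conquest construction) illustrates the idea:

    def convex_hull(points):
        # points: list of (x, y) tuples, no three collinear (assumption)
        pts = list(set(points))
        if len(pts) < 3:
            return pts
        start = min(pts)                    # lexicographically smallest
        hull, p = [], start
        while True:
            hull.append(p)
            q = next(r for r in pts if r != p)
            for r in pts:
                if r == p or r == q:
                    continue
                cross = ((q[0] - p[0]) * (r[1] - p[1])
                       - (q[1] - p[1]) * (r[0] - p[0]))
                if cross < 0:               # r lies clockwise of p -> q
                    q = r
            p = q
            if p == start:                  # wrapped all the way around
                break
        return hull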
