171
Uma técnica de decomposição a priori para geração paralela de malhas bidimensionais / A priori decomposition technique for parallel generation of two-dimensional meshes — Teixeira, Daniel Nascimento, January 2014
TEIXEIRA, Daniel Nascimento. Uma técnica de decomposição a priori para geração paralela de malhas bidimensionais. 2014. 94 f. Dissertação (mestrado) - Universidade Federal do Ceará, Centro de Ciências, Departamento de Computação, Fortaleza-CE, 2014.
This work describes a technique of two-dimensional domain decomposition for parallel mesh generation. This technique works for both distributed and shared memory, and allows the use of any data structure that manages axis-aligned rectangular regions to decompose the domain given as input, such as a quaternary tree (quadtree) or a binary space partition (BSP), for example. Any mesh generation process that respects the established prerequisites can be used in the created subdomains, for instance Delaunay or Advancing Front, among others. This technique is called a priori because the mesh on the interface of the subdomains is generated prior to their internal meshes. The load estimation for each subdomain is performed with the aid of a refined quadtree, whose refinement level guides the creation of the edges defined from the discretization of the boundaries of the inner cells only. This way of estimating the load produces results that accurately represent the number of elements to be generated in each subdomain. This contributes to a good partitioning of the domain, making parallel mesh generation significantly faster than serial generation. Furthermore, the quality of the mesh generated in parallel is equivalent, within acceptable limits, to that of the mesh generated serially.

Este trabalho descreve uma técnica de decomposição de domínios bidimensionais para geração em paralelo de malhas. Esta técnica funciona tanto para memória distribuída quanto compartilhada, além de permitir que se utilize qualquer estrutura de dados que gere regiões quadrangulares paralelas aos eixos para decompor o domínio dado como entrada. Pode-se utilizar, por exemplo, uma árvore quaternária (quadtree) ou uma partição binária do espaço (bsp). Além disso, qualquer processo de geração de malha que respeite os pré-requisitos estabelecidos pode ser empregado nos subdomínios criados, como as técnicas de Delaunay ou Avanço de Fronteira, dentre outras. A técnica proposta é dita a priori porque a malha de interface entre os subdomínios é gerada antes das suas malhas internas. A estimativa de carga de processamento associada a cada subdomínio é feita nesse trabalho com a ajuda de uma quadtree refinada, cujo nível de refinamento orienta a criação das arestas que são definidas a partir da discretização das fronteiras das células internas. Essa maneira de estimar carga produz resultados que representam, com boa precisão, o número de elementos a serem gerados em cada subdomínio. Isso contribui para um bom particionamento do domínio, fazendo com que a geração de malha em paralelo seja significativamente mais rápida do que a geração serial. Além disso, a qualidade da malha gerada em paralelo é qualitativamente equivalente àquela gerada serialmente, dentro de limites aceitáveis.
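As a hedged illustration of the load-estimation idea summarized above (not the implementation from the dissertation), the Python sketch below refines a quadtree against a test domain and uses the fully interior leaf cells, weighted by their area at the finest level, as a proxy for the number of elements each subdomain would generate. The circular domain, the four fixed subdomains and all function names are assumptions made for the example.

```python
# Minimal sketch of quadtree-based load estimation: cells crossing the
# domain boundary are refined, and interior leaf cells (weighted by their
# area at the finest level) approximate how many elements a subdomain
# would generate.  All names and the test domain are illustrative.

def classify(cell, inside, samples=4):
    """Classify a cell as 'in', 'out' or 'boundary' by point sampling."""
    x0, y0, size, _ = cell
    hits, total = 0, samples * samples
    for i in range(samples):
        for j in range(samples):
            px = x0 + (i + 0.5) * size / samples
            py = y0 + (j + 0.5) * size / samples
            hits += inside(px, py)
    return "in" if hits == total else ("out" if hits == 0 else "boundary")

def refine(cell, inside, max_depth, interior_leaves):
    """Recursively refine boundary cells; collect fully interior leaves."""
    kind = classify(cell, inside)
    x0, y0, size, depth = cell
    if kind == "in":
        interior_leaves.append(cell)
    elif kind == "boundary" and depth < max_depth:
        h = size / 2.0
        for dx in (0.0, h):
            for dy in (0.0, h):
                refine((x0 + dx, y0 + dy, h, depth + 1), inside,
                       max_depth, interior_leaves)
    # 'out' cells and boundary cells at max_depth contribute nothing here.

def estimated_loads(inside, max_depth=6):
    """Estimate the load of four axis-aligned subdomains of the unit square
    as the number of finest-level cells covered by interior leaves."""
    leaves = []
    refine((0.0, 0.0, 1.0, 0), inside, max_depth, leaves)
    loads = {"SW": 0, "SE": 0, "NW": 0, "NE": 0}
    for x0, y0, size, depth in leaves:
        cx, cy = x0 + size / 2.0, y0 + size / 2.0
        key = ("N" if cy >= 0.5 else "S") + ("E" if cx >= 0.5 else "W")
        loads[key] += 4 ** (max_depth - depth)   # weight by relative area
    return loads

if __name__ == "__main__":
    # Illustrative domain: a disk of radius 0.4 centred at (0.55, 0.45).
    disk = lambda x, y: (x - 0.55) ** 2 + (y - 0.45) ** 2 <= 0.4 ** 2
    print(estimated_loads(disk))
```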
172
[en] STATISTICAL OPTIMIZATION OF SPATIAL HIERARCHICAL STRUCTURES SEARCHES / [pt] OTIMIZAÇÃO ESTATÍSTICA DE BUSCAS PARA ESTRUTURAS HIERÁRQUICAS ESPACIAIS — RENER PEREIRA DE CASTRO, 29 May 2008
[pt] Este trabalho surgiu da seguinte observação: os clássicos
algoritmos de busca em 2d-tree começam da raiz para acessar
dados armazenados nas folhas. Entretanto, como as folhas
são os nós mais distantes da raiz, por que começar as
buscas pela raiz? Com representações clássicas de 2d-trees,
não existe outra forma de acessar uma folha. Existem 2d-
trees, porém, que permitem acessar em tempo constante
qualquer nó, dado sua posição e seu nível. Para o algoritmo
de busca, a posição é conhecida, mas o nível
não. Para estimar o nível de um nó qualquer, um método de
otimização estatística do custo médio das buscas é
proposto. Como os piores custos de busca são obtidos quando
se começa da raiz, este método melhora ambos: o consumo de
memória pelo uso de 2d-trees que permitem acessar em
tempo constante qualquer nó, e o tempo de execução através
da otimização proposta. / [en] This work emerged from the following observation: usual
search procedures for 2d-trees start from the root to
retrieve the data stored at the leaves. But since the
leaves are the farthest nodes from the root, why
start from the root? With usual 2d-trees representations,
there is no other way to access a leaf. However, there
exist 2d-trees which allow accessing any node in constant
time, given its position in space and its depth in the
2d-tree. Search procedures take the position as an input,
but the depth remains unknown. To estimate the depth of an
arbitrary node, a statistical optimization of the average
cost for the search procedures is introduced. Since the
highest costs of these algorithms are obtained when
starting from the root, this method improves both the
memory footprint, by using 2d-trees that allow accessing
any node in constant time, and the execution time, through
the proposed optimization.
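The following Python sketch illustrates the idea described in the abstract, without reproducing the dissertation's code: a quadtree stored in a hash map keyed by (level, cell index) gives expected constant-time access to any node from a position and a level, and a point query probes an estimated level first, climbing or descending as needed. The class name and the depth estimate used here (the average leaf level) are assumptions standing in for the statistical optimization that the work actually proposes.

```python
# Hashed quadtree: nodes live in a dict keyed by (level, ix, iy), so any
# node is reachable in expected O(1) time from a position and a level.
# Queries start from an estimated level instead of the root.

class HashedQuadtree:
    def __init__(self, points, max_level=8):
        self.max_level = max_level
        self.nodes = {}          # (level, ix, iy) -> point list (leaf) or None (internal)
        self._build(points)

    def _cell(self, x, y, level):
        n = 1 << level
        return (level, min(int(x * n), n - 1), min(int(y * n), n - 1))

    def _build(self, points):
        self.nodes[(0, 0, 0)] = list(points)
        frontier = [(0, 0, 0)]
        while frontier:
            key = frontier.pop()
            level, ix, iy = key
            pts = self.nodes[key]
            if len(pts) <= 1 or level == self.max_level:
                continue                       # stays a leaf
            self.nodes[key] = None             # becomes internal
            children = {}
            for (x, y) in pts:
                children.setdefault(self._cell(x, y, level + 1), []).append((x, y))
            for dx in (0, 1):                  # create all four children
                for dy in (0, 1):
                    ck = (level + 1, 2 * ix + dx, 2 * iy + dy)
                    self.nodes[ck] = children.get(ck, [])
                    frontier.append(ck)

    def estimated_level(self):
        """Crude stand-in for the statistical depth estimate: average leaf level."""
        leaf_levels = [k[0] for k, v in self.nodes.items() if v is not None]
        return round(sum(leaf_levels) / len(leaf_levels))

    def locate(self, x, y, start_level=None):
        """Return the leaf containing (x, y), starting from an estimated level."""
        level = self.estimated_level() if start_level is None else start_level
        key = self._cell(x, y, level)
        while key not in self.nodes:           # guess too deep: climb towards the root
            level -= 1
            key = self._cell(x, y, level)
        while self.nodes[key] is None:         # internal node: descend along the point
            level += 1
            key = self._cell(x, y, level)
        return key, self.nodes[key]

if __name__ == "__main__":
    import random
    random.seed(1)
    pts = [(random.random(), random.random()) for _ in range(200)]
    qt = HashedQuadtree(pts)
    print(qt.locate(0.3, 0.7))
```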
173
Desenvolvimento e análise de um digitalizador câmera-projetor de alta definição para captura de geometria e fotometria — Silva, Roger Correia Pinheiro, 26 August 2011
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior

Um sistema câmera-projetor é capaz de capturar informação geométrica tridimensional
de objetos e ambientes do mundo real. A captura de geometria em tal sistema baseia-se
na projeção de luz estruturada sobre um objeto através do projetor, e na captura da
cena modulada através da câmera. Com o sistema previamente calibrado, a deformação
da luz projetada causada pelo objeto fornece a informação necessária para reconstruir a
geometria do mesmo por meio de triangulação.
Este trabalho descreve o desenvolvimento de um digitalizador câmera-projetor de alta
definição (com resoluções de até 1920x1080 e 1280x720); são detalhadas as etapas e processos
que conduzem à reconstrução de geometria, como calibração câmera-projetor, calibração de cores, processamento da imagem capturada e triangulação. O digitalizador
desenvolvido utiliza a codificação de luz estruturada (b; s)-BCSL, que emprega a projeção
de uma sequência de faixas verticais coloridas sobre a cena. Este esquema de codificação
flexível oferece um número variado de faixas para projeção: quanto maior o número de
faixas, mais detalhada a geometria capturada.
Um dos objetivos deste trabalho é estimar o número limite de faixas (b,s)-BCSL possível dentro das resoluções atuais de vídeo de alta definição. Este número limite é aquele
que provê reconstrução densa da geometria alvo, e ao mesmo tempo possui baixo nível
de erro. Para avaliar a geometria reconstruída pelo digitalizador para os diversos números
de faixas, é proposto um protocolo para avaliação de erro. O protocolo desenvolvido
utiliza planos como objetos para mensurar a qualidade de reconstrução geométrica. A
partir da nuvem de pontos gerada pelo digitalizador, a equação do plano para a mesma é
estimada por meio de mínimos quadrados. Para um número fixo de faixas, são feitas cinco
digitalizações independentes do plano: cada digitalização leva a uma equação; também é
computado o plano médio, estimado a partir da união das cinco nuvens de pontos. Uma
métrica de distância no espaço projetivo é usada para avaliar a precisão e a acurácia de
cada número de faixas projetados.
Além da avaliação quantitativa, a geometria de vários objetos é apresentada para uma
avaliação qualitativa. Os resultados demonstram que a quantidade de faixas limite para
vídeos de alta resolução permite uma grande densidade de pontos mesmo em superfícies
com alta variação de cores. / A camera-projector system is capable of capturing three-dimensional geometric information
of objects and real-world environments. The capture of geometry in such a system
is based on the projection of structured light over an object by the projector, and the
capture of the modulated scene through the camera. With a calibrated system, the deformation
of the projected light caused by the object provides the information needed to
reconstruct its geometry through triangulation.
The present work describes the development of a high definition camera-projector system
(with resolutions up to 1920x1080 and 1280x720). The steps and processes that lead
to the reconstruction of geometry, such as camera-projector calibration, color calibration,
image processing and triangulation, are detailed. The developed scanner uses the
(b; s)-BCSL structured light coding, which employs the projection of a sequence of colored
vertical stripes on the scene. This coding scheme offers a flexible number of stripes for
projection: the higher the number of stripes, the more detailed the captured geometry.
One of the objectives of this work is to estimate the limit number of (b; s)-BCSL
stripes possible within the current resolutions of high definition video. This limit number
is the one that provides dense geometry reconstruction and at the same time has low error. To
evaluate the geometry reconstructed by the scanner for different numbers of stripes, we
propose a protocol for error measurement. The developed protocol uses planes as objects
to measure the quality of geometric reconstruction. From the point cloud generated by
the scanner, the plane equation is estimated by least squares. For a fixed
number of stripes, five independent scans are made for the plane: each scan leads to one
equation; the median plane, estimated from the union of the five clouds of points, is also
computed. A distance metric in the projective space is used to evaluate the precision and
the accuracy of each number of projected stripes.
In addition to the quantitative evaluation, the geometry of several objects is presented
for qualitative evaluation. The results show that the limit number of stripes for high
resolution video allows a high density of points even on surfaces with high color variation.
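As a hedged illustration of the plane-fitting step in the evaluation protocol described above (not the scanner's code), the sketch below estimates a plane from a point cloud by least squares, taking the centroid and the direction of least variance, and reports an RMS point-to-plane distance. The use of numpy and the synthetic noisy plane are assumptions of the example.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a point cloud.

    Returns (n, d) with the plane written as n . x + d = 0, where n is the
    unit normal given by the singular vector of the centred cloud with the
    smallest singular value."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid, full_matrices=False)
    normal = vt[-1]                      # direction of least variance
    return normal, -normal.dot(centroid)

def rms_distance(points, normal, d):
    """Root-mean-square point-to-plane distance, a simple precision measure."""
    pts = np.asarray(points, dtype=float)
    dist = pts.dot(normal) + d           # signed distances (normal is unit length)
    return float(np.sqrt(np.mean(dist ** 2)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic scan of the plane z = 0.2x - 0.1y + 5 with additive noise.
    xy = rng.uniform(-1.0, 1.0, size=(5000, 2))
    z = 0.2 * xy[:, 0] - 0.1 * xy[:, 1] + 5.0 + rng.normal(0.0, 0.01, 5000)
    cloud = np.column_stack([xy, z])
    n, d = fit_plane(cloud)
    print("normal:", n, "offset:", d, "rms:", rms_distance(cloud, n, d))
```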
174
A Shape-based weighting strategy applied to the covariance estimation on the ICP — Yamada, Fernando Akio de Araujo, 15 March 2016
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior

No problema de registro rígido por pares é preciso encontrar uma transformação rígida que alinha duas nuvens de pontos. A solução clássica e mais comum é o algoritmo Iterative Closest Point (ICP). No entanto, o ICP e muitas de suas variantes requerem que as nuvens de pontos já estejam grosseiramente alinhadas. Este trabalho apresenta um método denominado Shape-based Weighting Covariance Iterative Closest Point (SWC-ICP), uma melhoria do ICP clássico. A abordagem proposta aumenta a possibilidade de alinhar corretamente duas nuvens de pontos, independente da pose inicial, mesmo quando existe apenas sobreposição parcial entre elas, ou na presença de ruído e outliers. Ela se beneficia da geometria local dos pontos, codificada em tensores de orientação de segunda ordem, para prover um segundo conjunto de correspondências para o ICP. A matriz de covariância cruzada computada a partir deste conjunto é combinada com a matriz de covariância cruzada usual, seguindo uma estratégia heurística. Para comparar o método proposto com algumas abordagens recentes, um protocolo de avaliação detalhado para registro rígido é apresentado. Os resultados mostram que o SWC-ICP está entre os melhores métodos comparados, com performance superior em situações de grande deslocamento angular, mesmo na presença de ruído e outliers.

In the pairwise rigid registration problem we need to find a rigid transformation that aligns two point clouds. The classical and most common solution is the Iterative Closest Point (ICP) algorithm. However, the ICP and many of its variants require that the point clouds are already coarsely aligned. We present in this work a method named Shape-based Weighting Covariance Iterative Closest Point (SWC-ICP), an improvement over the classical ICP. Our approach increases the possibility of correctly aligning two point clouds, regardless of the initial pose, even when there is only a partial overlap between them, or in the presence of noise and outliers. It benefits from the local geometry of the points, encoded in second-order orientation tensors, to provide a second correspondence set to the ICP. The cross-covariance matrix computed from this set is combined with the usual cross-covariance matrix following a heuristic strategy. In order to compare our method with some recent approaches, we present a detailed evaluation protocol for rigid registration. Results show that the SWC-ICP is among the best methods compared, with superior performance in situations of wide angular displacement, even in the presence of noise and outliers.
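The sketch below illustrates the standard building block that SWC-ICP refines: recovering a rigid transformation from a cross-covariance matrix of corresponding points via SVD. The blended_covariance helper, with a fixed weight, is only a stand-in for the heuristic combination of the usual and the shape-based cross-covariance matrices described in the abstract; it is not the SWC-ICP rule.

```python
import numpy as np

def cross_covariance(P, Q):
    """Cross-covariance of two corresponding point sets (rows are points)."""
    Pc = P - P.mean(axis=0)
    Qc = Q - Q.mean(axis=0)
    return Pc.T @ Qc / len(P)

def rigid_from_covariance(H, P_mean, Q_mean):
    """Rotation R and translation t minimizing ||R p + t - q|| over the
    correspondences, recovered from the cross-covariance H by SVD."""
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = Q_mean - R @ P_mean
    return R, t

def blended_covariance(H_closest, H_shape, alpha=0.5):
    # Stand-in for the heuristic combination of the usual cross-covariance
    # (closest-point correspondences) with the shape-based one; the fixed
    # weight alpha is an assumption, not the SWC-ICP rule.
    return alpha * H_closest + (1.0 - alpha) * H_shape

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    P = rng.uniform(-1, 1, size=(100, 3))
    theta = np.pi / 6
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                       [np.sin(theta),  np.cos(theta), 0],
                       [0, 0, 1]])
    t_true = np.array([0.3, -0.2, 0.5])
    Q = P @ R_true.T + t_true
    H = cross_covariance(P, Q)
    R, t = rigid_from_covariance(H, P.mean(axis=0), Q.mean(axis=0))
    print(np.allclose(R, R_true), np.allclose(t, t_true))
```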
175
Variants and Generalization of Some Classical Problems in Combinatorial Geometry — Bharadwaj, Subramanya B V, January 2014 (PDF)
In this thesis we consider extensions and generalizations of some classical problems
in Combinatorial Geometry. Our work is an offshoot of four classical problems in
Combinatorial Geometry. A fundamental assumption in these problems is that the
underlying point set is R2. Two fundamental themes entwining the problems considered
in this thesis are: “What happens if we assume that the underlying point set is finite?”, and “What happens if we assume that the underlying point set has a special structure?”. Let P ⊂ R2 be a finite set of points in general position. It is reasonable to expect that if |P| is large then certain ‘patterns’ in P always occur. One of the first results was the Erdős–Szekeres Theorem, which showed that there exists an f(n) such that if |P| ≥ f(n) then there exists a convex subset S ⊆ P, |S| = n, i.e., a subset which is a convex polygon of size n. A considerable number of such results have been found since.
Avis et al. in 2001 posed the following question which we call the n-interior point
problem: Is there a finite integer g(n) for every n such that every point set P with
g(n) interior points has a convex subset S ⊆ P with n interior points, i.e., a subset
which is a convex polygon that contains exactly n interior points? They showed that
g(1) = 1, g(2) = 4. While it is known that g(3) = 9, it is not known whether g(n) exists for n ≥ 4. In the first part of this thesis, we give a positive solution to the n-interior point problem for point sets with a bounded number of convex layers.
We say a family of geometric objects C in Rd has the (l, k)-property if every subfamily
C′ ⊆ C of cardinality at most l is k-piercable. Danzer and Grünbaum posed
the following fundamental question which can be considered as a generalised version of
Helly’s theorem: For every positive integer k, does there exist a finite g(k, d) such that if any family of convex objects C in Rd has the (g(k, d), k)-property, then C is k-piercable? Very few results (mostly negative) are known.
Inspired by the original question of Danzer and Grünbaum we consider their question
in an abstract set theoretic setting. Let U be a set (possibly infinite). Let C be a family of subsets of U with the property that if C1, . . . ,Cp+1 ∈ C are p + 1 distinct subsets, then |C1 ∩ · · · ∩ Cp+1| ≤ l. In the second part of this thesis, we show, in this setting, the first general positive results for the Danzer–Grünbaum problem. As an extension, we show polynomial-sized kernels for hitting set and covering problems in our setting.
In the third part of this thesis, we broadly look at hitting and covering questions
with respect to points and families of geometric objects in Rd. Let P be a subset of points (possibly infinite) in Rd and C be a collection of subsets of P induced by objects of a given family. For the system (P, C), let νh be the packing number and νc the dual packing number. We consider the problem of bounding the transversal number τh and the dual transversal number τc in terms of νh and νc respectively.
These problems have been well studied in the case when P = R2. We systematically
look at the case when P is finite, showing bounds for intervals, halfspaces, orthants,
unit squares, skylines, rectangles, halfspaces in R3 and pseudo disks. We show bounds for rectangles when P = R2.
Given a point set P ⊆ Rd, a family of objects C and a real number 0 < ε < 1, the
strong epsilon net problem is to find a minimum-sized subset Q ⊆ P such that any
object C ∈ C with the property that |P ∩ C| ≥ εn is hit by Q. It is customary to express
the bound on the size of the set Q in terms of ε.
Let G be a uniform √n × √n grid. It is an intriguing question as to whether we
get significantly better bounds for ε-nets if we restrict the underlying point set to be the grid G. In the last part of this thesis we consider the strong epsilon net problem for families of geometric objects like lines and generalized parallelograms, when the underlying point set is the grid G. We also introduce the problem of finding ε-nets for arithmetic progressions and give some preliminary bounds.
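Since the first part of the thesis is stated for point sets with a bounded number of convex layers, a small illustrative helper (not taken from the thesis) is sketched below: it computes the convex layers of a finite planar point set by repeatedly peeling the convex hull.

```python
def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def convex_layers(points):
    """Peel convex hulls until no point remains; returns the list of layers."""
    remaining = set(points)
    layers = []
    while remaining:
        hull = convex_hull(list(remaining))
        layers.append(hull)
        remaining -= set(hull)
    return layers

if __name__ == "__main__":
    import random
    random.seed(7)
    P = [(random.randint(0, 20), random.randint(0, 20)) for _ in range(60)]
    for i, layer in enumerate(convex_layers(P)):
        print("layer", i, "size", len(layer))
```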
176
Extension des méthodes de géométrie algorithmique aux structures fractales / Extension of algorithmic geometry to fractal structures — Mishkinis, Anton, 27 November 2013
La définition de formes par ces procédés itératifs génère des structures avec des propriétés spécifiques intéressantes : rugosité, lacunarité... Cependant, les modèles géométriques classiques ne sont pas adaptés à la description de ces formes. Dans le but de développer un modeleur itératif pour concevoir des objets fractals décrits à l'aide du BCIFS, nous avons développé un ensemble d'outils et d'algorithmes génériques qui nous permettent d'évaluer, de caractériser et d'analyser les différentes propriétés géométriques (la localisation, le calcul de l'enveloppe convexe, de la distance à partir d'un point, etc.) de fractals. Nous avons identifié les propriétés des opérations standards (intersection, union, offset, ...) permettant de calculer une approximation d'image des fractales et de plus d'optimiser ces algorithmes d'approximation. Dans certains cas, il est possible de construire un CIFS avec l'opérateur de HUTCHINSON généralisé dont l'attracteur est suffisamment proche du résultat de l'opération par rapport à la métrique de Hausdorff. Nous avons développé un algorithme générique pour calculer ces CIFS pour une précision donnée. Nous avons défini la propriété d'auto-similarité de l'opération, qui définit un ensemble de transformations utilisé dans un système itératif résultant. Pour construire un CIFS exact de l'image, s'il existe, il faut prouver toutes les similitudes nécessaires manuellement. Nous explicitons également la condition de l'opération, quand le résultat peut être représenté par un IFS avec un opérateur de HUTCHINSON généralisé. Dans ce cas, il n'y a que cette condition à prouver manuellement.

Defining shapes by iteration allows us to generate new structures with specific properties (roughness, lacunarity), which cannot be achieved with classic modelling. For developing an iterative modeller to design fractals described by a BCIFS, we developed a set of tools and algorithms that permits one to evaluate, to characterize and to analyse different geometric properties (localisation, convex hull, volume, fractal dimension) of fractals. We identified properties of standard CAD operations (intersection, union, offset, ...) allowing us to approximate them for fractals and also to optimize these approximation algorithms. In some cases, it is possible to construct a CIFS with a generalised HUTCHINSON operator, whose attractor is close enough to the operation result with respect to the HAUSDORFF metric. We introduced a generic algorithm to compute such a CIFS for a given accuracy. We defined the self-similarity property of the operation, defining a set of transformations which are used in the output iterative system. In order to construct an exact CIFS of the image, if it exists, we must prove all the necessary similarities manually. We also make explicit the condition for the operation to be representable by an IFS with a generalised HUTCHINSON operator. In this case, only this condition should be proved manually.
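As a hedged illustration of the basic objects discussed above, the sketch below applies the Hutchinson operator of a small IFS to a finite sample and uses the Hausdorff distance between successive iterates as a convergence check. The Sierpinski-triangle maps and all names are assumptions for the example; none of the BCIFS machinery of the thesis is reproduced.

```python
def hutchinson(maps, points):
    """One application of the Hutchinson operator: union of the map images."""
    return {f(p) for f in maps for p in points}

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two finite 2D point sets."""
    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    def one_sided(X, Y):
        return max(min(dist(x, y) for y in Y) for x in X)
    return max(one_sided(A, B), one_sided(B, A))

if __name__ == "__main__":
    # Sierpinski triangle: three contractions of ratio 1/2 towards the corners.
    corners = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
    maps = [lambda p, c=c: ((p[0] + c[0]) / 2.0, (p[1] + c[1]) / 2.0)
            for c in corners]
    sample = {(0.0, 0.0)}
    for k in range(1, 7):
        new_sample = hutchinson(maps, sample)
        print(f"iteration {k}: {len(new_sample)} points, "
              f"moved {hausdorff(sample, new_sample):.4f} (Hausdorff)")
        sample = new_sample
```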
177
Module Grobner Bases Over Fields With Valuation — Sen, Aritra, 01 1900 (PDF)
Tropical geometry is an area of mathematics that interfaces algebraic geometry and combinatorics. The main object of study in tropical geometry is the tropical variety, which is the combinatorial counterpart of a classical variety. A classical variety is converted into a tropical variety by a process called tropicalization, thus reducing problems of algebraic geometry to problems of combinatorics. This new tropical variety encodes several useful pieces of information about the original variety; for example, an algebraic variety and its tropical counterpart have the same dimension.
In this thesis, we look at some of the computational aspects of tropical algebraic geometry. We study a generalization of Gröbner basis theory of modules which, unlike the standard Gröbner basis, also takes the valuation of the coefficients into account. This was first introduced in (Maclagan & Sturmfels, 2009) in the setting of polynomial rings, and its computational aspects were first studied in (Chan & Maclagan, 2013) for the polynomial ring case. The motivation for this comes from tropical geometry, as it can be used to compute tropicalizations of varieties. Apart from that, it has many other computational advantages. For example, in the standard case the size of the initial submodule generally grows with the degree of the generators; in this case, however, we give an example of a family of submodules where the size of the initial submodule remains constant. We also develop an algorithm for the computation of Gröbner bases of submodules of modules over Z/p^ℓZ[x1, . . . , xn] that works for any weight vector.
We also look at some important applications of this new theory. We show how it can be useful in efficiently solving the submodule membership problem. We also study the computation of Hilbert polynomials, syzygies and free resolutions.
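The thesis' precise definitions are not reproduced here; as a hedged illustration of the kind of leading-term selection a valuation-aware Gröbner theory relies on, the sketch below picks, for a polynomial with rational coefficients, the terms that minimize the p-adic valuation of the coefficient plus the weight of the monomial. The function names, the min convention and the absence of tie-breaking are simplifying assumptions.

```python
from fractions import Fraction

def p_adic_val(q, p):
    """p-adic valuation of a nonzero rational number."""
    q = Fraction(q)
    if q == 0:
        raise ValueError("valuation of zero is undefined here")
    v, num, den = 0, q.numerator, q.denominator
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

def initial_terms(poly, w, p):
    """Terms of minimal  val_p(coefficient) + w . exponent  (a tropical,
    min-convention analogue of picking the leading term).  `poly` maps
    exponent tuples to nonzero rational coefficients."""
    def weight(exp, coeff):
        return p_adic_val(coeff, p) + sum(wi * ei for wi, ei in zip(w, exp))
    best = min(weight(e, c) for e, c in poly.items())
    return {e: c for e, c in poly.items() if weight(e, c) == best}

if __name__ == "__main__":
    # f = 2*x^2 + 3*x*y + 12*y^2 over Q with the 2-adic valuation and
    # weight vector w = (1, 1): the weights are 1+2, 0+2 and 2+2, so the
    # initial term is 3*x*y.
    f = {(2, 0): Fraction(2), (1, 1): Fraction(3), (0, 2): Fraction(12)}
    print(initial_terms(f, w=(1, 1), p=2))
```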
178
Similarity between Scalar Fields — Narayanan, Vidya, January 2016 (PDF)
Scientific phenomena are often studied through collections of related scalar fields, such as data generated by simulation experiments that are parameter or time dependent. Exploration of such data requires robust measures to compare them in a feature aware and intuitive manner.
Topological data analysis is a growing area that has had success in analyzing and visualizing scalar fields in a feature aware manner based on the topological features. Various data structures such as contour and merge trees, Morse-Smale complexes and extremum graphs have been developed to study scalar fields. The extremum graph is a topological data structure based on either the maxima or the minima of a scalar field. It preserves local geometrical structure by maintaining relative locations of extrema and their neighborhoods. It provides a suitable abstraction to study a collection of datasets where features are expressed by descending or ascending manifolds and their proximity is of importance.
In this thesis, we design a measure to understand the similarity between scalar fields based on the extremum graph abstraction. We propose a topological structure called the complete extremum graph and define a distance measure on it that compares scalar fields in a feature aware manner. We design an algorithm for computing the distance and show its applications in analyzing time varying data such as understanding periodicity, feature correspondence and tracking, and identifying key frames.
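As a hedged, much-simplified illustration of the extremum graph abstraction (not the data structure or algorithm of the thesis), the sketch below labels each vertex of a gridded scalar field with the maximum reached by steepest ascent and connects two maxima whenever their ascent regions are adjacent. All names and the 8-neighbour ascent rule are assumptions of the example.

```python
import numpy as np

def steepest_ascent_labels(field):
    """Label each grid vertex with the local maximum reached by repeatedly
    moving to the largest 8-neighbour (a crude ascending-manifold partition)."""
    rows, cols = field.shape
    nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    def next_vertex(i, j):
        best, best_val = (i, j), field[i, j]
        for di, dj in nbrs:
            a, b = i + di, j + dj
            if 0 <= a < rows and 0 <= b < cols and field[a, b] > best_val:
                best, best_val = (a, b), field[a, b]
        return best
    labels = {}
    def climb(v):
        path = []
        while v not in labels:
            path.append(v)
            nxt = next_vertex(*v)
            if nxt == v:              # local maximum found
                labels[v] = v
                break
            v = nxt
        root = labels[v]
        for u in path:
            labels[u] = root
        return root
    for i in range(rows):
        for j in range(cols):
            climb((i, j))
    return labels

def extremum_graph(field):
    """Maxima as nodes; an edge whenever two ascent regions are 4-adjacent."""
    labels = steepest_ascent_labels(field)
    rows, cols = field.shape
    edges = set()
    for i in range(rows):
        for j in range(cols):
            for a, b in ((i + 1, j), (i, j + 1)):
                if a < rows and b < cols and labels[(i, j)] != labels[(a, b)]:
                    edges.add(tuple(sorted((labels[(i, j)], labels[(a, b)]))))
    return sorted(set(labels.values())), sorted(edges)

if __name__ == "__main__":
    x, y = np.meshgrid(np.linspace(0, 1, 40), np.linspace(0, 1, 40))
    f = np.sin(6 * x) * np.sin(6 * y)        # a field with several peaks
    maxima, edges = extremum_graph(f)
    print(len(maxima), "maxima,", len(edges), "edges")
```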
179
Maillage dynamique tridimensionnel pour la simulation de l'écoulement dans un bassin sédimentaire / Three dimensional dynamic meshing for hydrocarbons flow simulation in sedimentary basin — Yahiaoui, Brahim, 17 December 2013
Afin de cibler une meilleure rentabilité des gisements d'hydrocarbures, la simulation numérique présente un intérêt grandissant dans le secteur pétrolier. Dans ce contexte, deux principaux types d'applications sont distingués : l'ingénierie du réservoir, où les modèles géologiques sont définis comme statiques, et l'exploration des gisements par la simulation de la formation des hydrocarbures. Cette dernière application nécessite des modèles de bassins dynamiques. Pour ce type de simulation, la modélisation mathématique et numérique a connu une avancée importante durant les dernières années. En revanche, la construction de maillages indispensables pour les simulations reste une tâche lourde dans le cas de géométries complexes et dynamiques. La difficulté se traduit par la particularité du domaine à mailler, qui est dictée par la mécanique des milieux granulaires, c'est-à-dire des milieux quasi-incompressibles et hétérogènes. De plus, les données sismiques sont des surfaces non-implicites modélisant des blocs volumiques. Dans ce cadre, nous nous intéressons à la génération de maillages lagrangiens hexa-dominants des bassins à géométrie complexe. Le maillage souhaité doit contenir une couche de mailles principalement hexaédriques entre deux horizons et respecter les surfaces failles traversant ces horizons. Une méthodologie originale basée sur la construction d'un espace déplié est proposée. Le maillage souhaité est alors traduit par une grille 3D contrainte dans cet espace. Plusieurs techniques d'optimisation de maillages sont aussi proposées.

To target more profitable oil and gas deposits, numerical simulation is of growing interest in the oil sector. In this context, two main types of applications are distinguished: reservoir engineering, where the geological models are defined as static, and exploration, through the simulation of hydrocarbon formation. The latter application requires dynamic basin models. For this type of simulation, mathematical and numerical modelling has seen important advances in recent years. However, the construction of the meshes needed for the simulations remains a difficult task in the case of complex and dynamic geometries. The difficulty comes from the particular nature of the domain to be meshed, which is dictated by the mechanics of granular media, that is, almost incompressible and heterogeneous media. In addition, the seismic data are non-implicit surfaces modelling volume blocks. In this context, we focus on Lagrangian hex-dominant mesh generation for basins with complex geometry. The desired mesh must contain a layer of mostly hexahedral cells between two horizons and conform to the fault surfaces crossing these horizons. A novel methodology based on the construction of an unfolded space is introduced. The desired mesh is then seen as a constrained 3D grid in this space. Several mesh optimization techniques are also proposed.
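As a hedged illustration of the "layer of mostly hexahedral cells between two horizons" mentioned above (ignoring faults and the unfolded-space construction that the thesis actually introduces), the sketch below builds a structured hexahedral layer by interpolating between two analytic horizon surfaces. The horizon functions and the indexing scheme are assumptions of the example.

```python
import numpy as np

def hex_layer(z_bottom, z_top, nx=10, ny=10, nz=3):
    """Build a structured layer of hexahedra between two horizon surfaces.

    z_bottom, z_top: callables (x, y) -> elevation.  Returns (vertices, hexes),
    where each hexahedron lists the 8 vertex indices of one cell."""
    xs = np.linspace(0.0, 1.0, nx + 1)
    ys = np.linspace(0.0, 1.0, ny + 1)
    def vid(i, j, k):
        return (k * (ny + 1) + j) * (nx + 1) + i
    vertices = np.zeros(((nx + 1) * (ny + 1) * (nz + 1), 3))
    for k in range(nz + 1):
        t = k / nz                             # interpolation between horizons
        for j, y in enumerate(ys):
            for i, x in enumerate(xs):
                z = (1 - t) * z_bottom(x, y) + t * z_top(x, y)
                vertices[vid(i, j, k)] = (x, y, z)
    hexes = []
    for k in range(nz):
        for j in range(ny):
            for i in range(nx):
                hexes.append([vid(i, j, k),       vid(i + 1, j, k),
                              vid(i + 1, j + 1, k), vid(i, j + 1, k),
                              vid(i, j, k + 1),     vid(i + 1, j, k + 1),
                              vid(i + 1, j + 1, k + 1), vid(i, j + 1, k + 1)])
    return vertices, hexes

if __name__ == "__main__":
    bottom = lambda x, y: 0.1 * np.sin(3 * x) + 0.05 * y   # illustrative horizons
    top = lambda x, y: 1.0 + 0.2 * np.cos(2 * x + y)
    V, H = hex_layer(bottom, top)
    print(len(V), "vertices,", len(H), "hexahedra")
```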
180
Simplification polyédrique optimale pour le rendu / Optimal polyhedral simplification for rendering — Charrier, Emilie, 04 December 2009
En informatique, les images sont numériques et donc composées de pixels en 2D et de voxels en 3D. Dans une scène virtuelle 3D, il est impossible de manipuler directement les objets comme des ensembles de voxels en raison du trop gros volume de données. Les objets sont alors polyédrisés, c'est-à-dire remplacés par une collection de facettes. Pour ce faire, il est primordial de savoir décider si un sous-ensemble de voxels peut être transformé en une facette dans la représentation polyédrique. Ce problème est appelé reconnaissance de plans discrets. Pour le résoudre, nous mettons en place un nouvel algorithme spécialement adapté pour les ensembles de voxels denses dans une boite englobante. Notre méthode atteint une complexité quasi-linéaire dans ce cas et s'avère efficace en pratique. En parallèle, nous nous intéressons à un problème algorithmique annexe intervenant dans notre méthode de reconnaissance de plans discrets. Il s'agit de calculer les deux enveloppes convexes des points de Z2 contenus dans un domaine vertical borné et situés de part et d'autre d'une droite quelconque. Nous proposons une méthode de complexité optimale et adaptative pour calculer ces enveloppes convexes. Nous présentons le problème de manière détournée : déterminer le nombre rationnel à dénominateur borné qui approxime au mieux un nombre réel donné. Nous établissons le lien entre ce problème numérique et son interprétation géométrique dans le plan. Enfin, nous proposons indépendamment un nouvel algorithme pour calculer l'épaisseur d'un ensemble de points dans le réseau Zd. Notre méthode est optimale en 2D et gloutonne mais efficace en dimension supérieure.

In computer science, pictures are digital and so they are composed of pixels in 2D or of voxels in 3D. In 3D virtual scenes, we cannot directly manipulate objects as sets of voxels because the data are too huge. As a result, the objects are transformed into polyhedra, i.e. collections of facets. For this, we must be able to decide if a subset of voxels can be replaced by a facet in the polyhedrisation. This problem is called digital plane recognition. To solve it, we design a new algorithm especially adapted for sets of voxels which are dense in a bounding box. Our method achieves a quasi-linear worst-case time complexity in this case and it is efficient in practice. In parallel, we study another algorithmic problem which occurs in our digital plane recognition algorithm: computing the two convex hulls of grid points lying in a bounded vertical domain and located on either side of a straight line. We propose an optimal time complexity method, which is also output-sensitive, to compute these convex hulls. We present the problem in a different way: find the rational number of bounded denominator that best approximates a given real number. We establish the link between this numerical problem and its geometric interpretation in the plane. Finally, we independently propose a new algorithm to compute the lattice width of a set of points in Zd. Our method is optimal in 2D, and greedy but efficient in higher dimensions.
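The numerical reformulation quoted above, finding the rational number with bounded denominator that best approximates a given real, can be illustrated with a textbook Stern–Brocot (mediant) search; this is a hedged sketch and not necessarily the algorithm developed in the thesis.

```python
from fractions import Fraction

def best_rational(x, max_den):
    """Best rational approximation of x with denominator <= max_den,
    found by walking the Stern-Brocot tree with mediant steps."""
    n = int(x // 1)                 # integer part, reattached at the end
    frac = x - n                    # fractional part in [0, 1)
    lo, hi = Fraction(0, 1), Fraction(1, 1)
    best = min((Fraction(0), Fraction(1)), key=lambda q: abs(frac - q))
    while True:
        med = Fraction(lo.numerator + hi.numerator,
                       lo.denominator + hi.denominator)
        if med.denominator > max_den:
            break                   # denominators only grow along the path
        if abs(frac - med) < abs(frac - best):
            best = med
        if frac < med:
            hi = med
        elif frac > med:
            lo = med
        else:
            break                   # exact hit
    return best + n

if __name__ == "__main__":
    import math
    # Best approximation of pi with denominator at most 100: 311/99.
    print(best_rational(math.pi, 100))
```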