291 |
Enhancement and detection of 3D seismic attributes by advanced image-analysis techniques. Li, Gengxiang, 19 April 2012 (has links) (PDF)
Moments have been widely used in pattern recognition and image processing. In this thesis, we focus on 3D orthogonal Gauss-Hermite moments, 2D and 3D Gauss-Hermite moment invariants, a fast algorithm for the coherence attribute, and applications of moment-based methods to seismic interpretation. We study methods for automatic seismic horizon tracking based on Gauss-Hermite moments in the 1D and 3D cases, and we introduce an approach based on a multi-scale study of the moment invariants. Experimental results show that the 3D Gauss-Hermite moment method outperforms other popular algorithms. We also address seismic facies analysis based on feature vectors built from 3D Gauss-Hermite moments, together with Self-Organizing Maps and data-visualization techniques. The excellent facies-analysis results show that the integrated environment yields better performance in interpreting cluster structure. Finally, we introduce parallel processing and volume visualization. Exploiting the performance offered by multi-threading and multi-core technologies in seismic data processing and interpretation, we compute seismic attributes efficiently and track horizons. We also discuss a volume-rendering algorithm based on the OpenSceneGraph engine that gives better insight into the structure of seismic data.
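The 1D Gauss-Hermite moments underlying the horizon tracking can be sketched in a few lines of numpy. This is a minimal illustration of one common formulation (the Gaussian weighting and the normalisation constant below are assumptions, not taken from the thesis itself):

```python
import numpy as np
from math import factorial, pi, sqrt
from numpy.polynomial.hermite import hermval

def gauss_hermite_moment(signal, x, order, sigma=1.0):
    """nth-order 1D Gauss-Hermite moment of a sampled signal on a
    uniform grid x: Hermite polynomial H_order on the scaled axis,
    damped by a Gaussian envelope."""
    u = x / sigma
    coeffs = np.zeros(order + 1)
    coeffs[order] = 1.0            # selects the Hermite polynomial H_order
    basis = hermval(u, coeffs) * np.exp(-u * u / 2.0)
    # orthonormalisation constant of the Hermite functions
    norm = 1.0 / sqrt(sigma * sqrt(pi) * 2.0 ** order * factorial(order))
    # Riemann sum on the uniform grid approximates the integral
    return norm * np.sum(signal * basis) * (x[1] - x[0])
```

Odd-order moments vanish for signals symmetric about the window centre, which is one reason moment-based features are attractive for tracking a horizon across traces.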
|
292 |
Testing the compatibility of constraints for parameters of a geodetic adjustment model. Lehmann, Rüdiger; Neitzel, Frank, 06 August 2014 (has links) (PDF)
Geodetic adjustment models are often set up in a way that the model parameters need to fulfil certain constraints.
The normalized Lagrange multipliers have been used as a measure of the strength of constraint in such a way that
if one of them exceeds in magnitude a certain threshold then the corresponding constraint is likely to be incompatible with
the observations and the rest of the constraints. We show that these and similar measures can be deduced as test statistics of
a likelihood ratio test of the statistical hypothesis that some constraints are incompatible in the same sense. This has been
done before only for special constraints (Teunissen in Optimization and Design of Geodetic Networks, pp. 526–547,
1985). We start from the simplest case, that the full set of constraints is to be tested, and arrive at the advanced case,
that each constraint is to be tested individually. Every test is worked out both for a known as well as for an unknown
prior variance factor. The corresponding distributions under null and alternative hypotheses are derived. The theory is
illustrated by the example of a double levelled line.
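The quantities entering such a test can be illustrated with a small numpy sketch of a least-squares adjustment with linear constraints. The KKT formulation and the extraction of the multiplier cofactor matrix below are a textbook-style assumption, not the paper's implementation; normalising each multiplier k_i by the square root of Q_kk[i, i] (times the variance factor) gives the kind of compatibility measure discussed above:

```python
import numpy as np

def constrained_adjustment(A, l, C, w):
    """Least-squares adjustment of A x ~ l subject to C x = w.

    Returns the estimate x, the Lagrange multipliers k of the
    constraints, and their cofactor matrix Q_kk.
    """
    n, m = A.shape[1], C.shape[0]
    N = A.T @ A
    # KKT system of the constrained normal equations
    K = np.block([[N, C.T], [C, np.zeros((m, m))]])
    K_inv = np.linalg.inv(K)
    sol = K_inv @ np.concatenate([A.T @ l, w])
    x, k = sol[:n], sol[n:]
    # lower-right block of K_inv is -(C N^-1 C^T)^-1
    Q_kk = -K_inv[n:, n:]
    return x, k, Q_kk
```

When the observations satisfy a constraint exactly, its multiplier is zero; a large normalised multiplier signals an incompatible constraint.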
|
293 |
Tuned and asynchronous stencil kernels for CPU/GPU systems. Venkatasubramanian, Sundaresan, 18 May 2009 (has links)
We describe heterogeneous multi-CPU and multi-GPU implementations of Jacobi's iterative method for the 2-D Poisson equation on a structured grid, in both single- and double-precision. Properly tuned, our best implementation achieves 98% of the empirical streaming GPU bandwidth (66% of peak) on an NVIDIA C1060. Motivated to find a still faster implementation, we further consider "wildly asynchronous" implementations that can reduce or even eliminate the synchronization bottleneck between iterations. In these versions, which are based on the principle of chaotic relaxation (Chazan and Miranker, 1969), we simply remove or delay synchronization between iterations, thereby potentially trading off more flops (via more iterations to converge) for a higher degree of asynchronous parallelism. Our relaxed-synchronization implementations on a GPU can be 1.2-2.5x faster than our best synchronized GPU implementation while achieving the same accuracy. Looking forward, this result suggests research on similarly "fast-and-loose" algorithms in the coming era of increasingly massive concurrency and relatively high synchronization or communication costs.
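The synchronous baseline that the "wildly asynchronous" versions relax can be sketched in a few lines of numpy. This is a minimal serial illustration (grid, boundary conditions, and iteration count are assumptions, not the tuned GPU kernels of the thesis):

```python
import numpy as np

def jacobi_poisson(f, h, iters):
    """Jacobi sweeps for -laplace(u) = f on a square grid of spacing h
    with zero Dirichlet boundaries (5-point stencil)."""
    u = np.zeros_like(f)
    for _ in range(iters):
        u_new = u.copy()
        # every interior point reads the *previous* iterate -- this
        # barrier between iterations is what chaotic relaxation removes
        u_new[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1]
                                    + u[1:-1, 2:] + u[1:-1, :-2]
                                    + h * h * f[1:-1, 1:-1])
        u = u_new
    return u
```

In the asynchronous variants, each thread keeps sweeping its own sub-domain using whatever neighbour values happen to be visible, rather than waiting at the barrier implied by `u_new`.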
|
294 |
Medical Image Registration and Stereo Vision Using Mutual Information. Fookes, Clinton Brian, January 2003 (has links)
Image registration is a fundamental problem that can be found in a diverse range of fields within the research community. It is used in areas such as engineering, science, medicine, robotics, computer vision and image processing, which often require the process of developing a spatial mapping between sets of data. Registration plays a crucial role in the medical imaging field where continual advances in imaging modalities, including MRI, CT and PET, allow the generation of 3D images that explicitly outline detailed in vivo information of not only human anatomy, but also human function. Mutual Information (MI) is a popular entropy-based similarity measure which has found use in a large number of image registration applications. Stemming from information theory, this measure generally outperforms most other intensity-based measures in multimodal applications as it does not assume the existence of any specific relationship between image intensities. It only assumes a statistical dependence. The basic concept behind any approach using MI is to find a transformation, which when applied to an image, will maximise the MI between two images. This thesis presents research using MI in three major topics encompassed by the computer vision and medical imaging field: rigid image registration, stereo vision, and non-rigid image registration. In the rigid domain, a novel gradient-based registration algorithm (MIGH) is proposed that uses Parzen windows to estimate image density functions and Gauss-Hermite quadrature to estimate the image entropies. The use of this quadrature technique provides an effective and efficient way of estimating entropy while bypassing the need to draw a second sample of image intensities (a procedure required in previous Parzen-based MI registration approaches). It is possible to achieve identical results with the MIGH algorithm when compared to current state of the art MI-based techniques. 
These results are achieved using half the previously required sample sizes, thus doubling the statistical power of the registration algorithm. Furthermore, the MIGH technique improves algorithm complexity by up to an order of N, where N represents the number of samples extracted from the images. In stereo vision, a popular passive method of depth perception, new extensions have been proposed in order to increase the robustness of MI-based stereo matching algorithms. Firstly, prior probabilities are incorporated into the MI measure to considerably increase the statistical power of the matching windows. The statistical power, directly related to the number of samples, can become too low when small matching windows are utilised. These priors, which are calculated from the global joint histogram, are tuned to a two-level hierarchical approach. A 2D match surface, in which the match score is computed for every possible combination of template and matching windows, is also utilised to enforce left-right consistency and uniqueness constraints. These additions to MI-based stereo matching significantly enhance the algorithm's ability to detect correct matches while decreasing computation time and improving the accuracy, particularly when matching across multi-spectral stereo pairs. MI has also recently found use in the non-rigid domain due to a need to compute multimodal non-rigid transformations. The viscous fluid algorithm is perhaps the best method for recovering large local mis-registrations between two images. However, this model can only be used on images from the same modality as it assumes similar intensity values between images. Consequently, a hybrid MI-Fluid algorithm is proposed to compute a multimodal non-rigid registration technique.
MI is incorporated via the use of a block matching procedure to generate a sparse deformation field which drives the viscous fluid algorithm. This algorithm is also compared to two other popular local registration techniques, namely Gaussian convolution and the thin-plate spline warp, and is shown to produce comparable results. An improved block matching procedure is also proposed whereby a Reversible Jump Markov Chain Monte Carlo (RJMCMC) sampler is used to optimally locate grid points of interest. These grid points have a larger concentration in regions of high information and a lower concentration in regions of low information. Previous methods utilise only a uniform distribution of grid points throughout the image.
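In its simplest histogram form, the MI criterion that all three parts of the thesis build on reduces to a few lines. The sketch below uses a plain joint histogram rather than the Parzen-window/Gauss-Hermite-quadrature estimator of the thesis:

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """MI between two equally sized images, estimated from their
    joint intensity histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = joint / joint.sum()          # joint intensity distribution
    p_a = p_ab.sum(axis=1)              # marginal of image a
    p_b = p_ab.sum(axis=0)              # marginal of image b
    nz = p_ab > 0                       # avoid log(0)
    return np.sum(p_ab[nz] * np.log(p_ab[nz] / np.outer(p_a, p_b)[nz]))
```

Registration then searches over transformations of one image for the one maximising this score against the other.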
|
295 |
Use of the Gauss normal map in the polygonization of implicit surfaces. IWANO, Thiciany Matsudo, 06 July 2018 (has links)
Previous issue date: 2005-10. In this work we present a study of the main techniques for generating polygonal meshes from surfaces described mathematically by implicit functions, that is, surfaces defined by the set S = f−1(0) = {X ∈ R3 | f(X) = 0}, where f : R3 → R is at least of class C2. We show a method for obtaining the Gaussian and mean curvatures of these surfaces from the vector ∇f at each point of S. We discuss questions such as the preservation of the geometric and topological features of the graphical object. Among the methods studied, we highlight the Marching Triangles algorithm, which generates a mesh starting from an arbitrary point p on the surface S and a local reference frame, using an advancing-front approach. In its implementation, we use the radius of curvature, computed from the absolute maximum normal curvature of the surface at each point p of S, to adapt the edge lengths of the triangular mesh to the local geometry of the surface S.
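The curvature computation used to adapt edge lengths can be sketched from the gradient and Hessian of f. The expressions below are Goldman-style formulas for implicit surfaces, an assumption about the exact form used rather than the dissertation's own code:

```python
import numpy as np

def adjugate3(A):
    # classical adjugate via cofactors (works even when A is singular)
    C = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T

def implicit_curvatures(grad, hess):
    """Gaussian curvature K and mean curvature H of the level set
    f = 0 at a point, from the gradient and Hessian of f there."""
    g = np.asarray(grad, dtype=float)
    Hf = np.asarray(hess, dtype=float)
    n2 = g @ g
    K = (g @ adjugate3(Hf) @ g) / n2 ** 2
    H = (g @ Hf @ g - n2 * np.trace(Hf)) / (2.0 * n2 ** 1.5)
    return K, H
```

From the maximum normal curvature one gets the local radius of curvature, which bounds the triangle edge length in high-curvature regions.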
|
296 |
An extension of the Gauss-Bonnet theorem for surfaces with cone-type ends. Branco, Flavia Malta, January 1999 (has links)
In this work we define surfaces with cone-type ends with coefficient a ≥ 0, a class of complete non-compact surfaces having a nice behaviour at infinity, and we present an extension of the Gauss-Bonnet Theorem for these surfaces with coefficient a > 0.
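For reference, the classical theorem being extended states that for a compact surface S with boundary,

```latex
\int_S K \, dA + \int_{\partial S} k_g \, ds = 2\pi\,\chi(S),
```

where K is the Gaussian curvature, k_g the geodesic curvature of the boundary, and χ(S) the Euler characteristic; the contribution of the thesis is a version of this identity for the complete non-compact surfaces defined above.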
|
299 |
Supersymmetric Quantum Mechanics, Index Theorems and Equivariant Cohomology. Nguyen, Hans, January 2018 (has links)
In this thesis, we investigate supersymmetric quantum mechanics (SUSYQM) and its relation to index theorems and equivariant cohomology. We define some basic constructions on super vector spaces in order to set the language for the rest of the thesis. The path integral in quantum mechanics is reviewed together with some related calculational methods, and we give a path integral expression for the Witten index. Thereafter, we discuss the structure of SUSYQM in general. We show that the Witten index can be taken to be the difference in dimension between the bosonic and fermionic zero-energy eigenspaces. In the subsequent section, we derive index theorems. The models investigated are the supersymmetric non-linear sigma models with one or two supercharges. The former produces the index theorem for the spin complex and the latter the Chern-Gauss-Bonnet Theorem. We then generalise to the case where a group action (by a compact connected Lie group) is included and the orbit space is taken as the underlying space, in which case equivariant cohomology is introduced. In particular, the Weil and Cartan models are investigated, and SUSYQM Lagrangians are derived using the obtained differentials. The goal of relating this to gauge quantum mechanics was unfortunately not achieved. What was shown, however, is that the Euler characteristics of a closed oriented manifold and of its homotopy quotient by U(1)^n coincide.
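The characterisation of the Witten index used here can be written compactly as

```latex
\operatorname{ind}_W \;=\; \operatorname{Tr}\!\left[(-1)^F e^{-\beta H}\right]
\;=\; \dim\ker H\big|_{\mathcal{H}_B} \;-\; \dim\ker H\big|_{\mathcal{H}_F},
```

where F is the fermion number operator and H_B, H_F denote the bosonic and fermionic sectors; independence of β follows because supersymmetry pairs the nonzero-energy bosonic and fermionic states, so their contributions cancel.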
|
300 |
Parallel triangular solution in the out-of-core multifrontal approach for solving large sparse linear systems. Slavova, Tzvetomila, 28 April 2009 (has links)
We consider the solution of very large systems of linear equations with direct multifrontal methods. In this context the size of the factors is an important limitation for the use of sparse direct solvers. We will thus assume that the factors have been written on the local disks of our target multiprocessor machine during parallel factorization. Our main focus is the study and the design of efficient approaches for the forward and backward substitution phases after a sparse multifrontal factorization. These phases involve sparse triangular solution and have often been neglected in previous works on sparse direct factorization. In many applications, however, the time for the solution can be the main bottleneck for the performance. This thesis consists of two parts. The focus of the first part is on optimizing the out-of-core performance of the solution phase. The focus of the second part is to further improve the performance by exploiting the sparsity of the right-hand side vectors.
In the first part, we describe and compare two approaches to access data from the hard disk. We then show that in a parallel environment the task scheduling can strongly influence the performance. We prove that a constraint ordering of the tasks is possible; it does not introduce any deadlock and it improves the performance. Experiments on large real test problems (more than 8 million unknowns) using an out-of-core version of a sparse multifrontal code called MUMPS (MUltifrontal Massively Parallel Solver) are used to analyse the behaviour of our algorithms. In the second part, we are interested in applications with sparse multiple right-hand sides, particularly those with single nonzero entries. The motivating applications arise in electromagnetism and data assimilation. In such applications, we need either to compute the null space of a highly rank deficient matrix or to compute entries in the inverse of a matrix associated with the normal equations of linear least-squares problems. We cast both of these problems as linear systems with multiple right-hand side vectors, each containing a single nonzero entry. We describe, implement and comment on efficient algorithms to reduce the input-output cost during an out-of-core execution. We show how the sparsity of the right-hand side can be exploited to limit both the number of operations and the amount of data accessed. The work presented in this thesis has been partially supported by the SOLSTICE ANR project (ANR-06-CIS6-010).
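The structural idea behind exploiting a right-hand side with a single nonzero entry can be illustrated with a toy forward substitution for L x = e_i on a unit lower-triangular matrix: entries of x above position i are zero by construction, and only nonzero entries need to be stored or propagated. This is a serial sketch under an assumed row-wise data layout; the actual solver works on the distributed multifrontal factors and prunes using the elimination tree:

```python
import numpy as np

def forward_solve_sparse_rhs(rows, n, i):
    """Solve L x = e_i, with L unit lower triangular given row-wise:
    rows[r] is a list of (col, value) pairs of the strictly lower
    part of row r. Only nonzero entries of x are kept, in a dict."""
    x = {i: 1.0}                       # x_j = 0 for all j < i
    for r in range(i + 1, n):
        s = sum(v * x[c] for c, v in rows[r] if c in x)
        if s != 0.0:
            x[r] = -s                  # unit diagonal: x_r = 0 - s
    return x
```

With many such right-hand sides, grouping those whose nonzeros touch the same parts of the factors is what limits both the operation count and the volume of factor data read back from disk.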
|