41

Méthodes exactes et approchées par partition en cliques de graphes / Exact and approximation methods by clique partition of graphs

Phan, Raksmey 28 November 2013
This thesis was carried out within the ToDo project (Time versus Optimality in Discrete Optimization, ANR 09-EMER-010), funded by the French National Research Agency. We address the exact and approximate resolution of two graph problems. Seeking a compromise between running time and solution quality, we propose a new approach based on partitioning the vertices of the graph into cliques, which aims (1) to solve problems quickly with exact algorithms and (2) to guarantee the quality of the results found by approximation algorithms. We combine our approach with filtering techniques and a list heuristic. To complete this theoretical work, we implemented our algorithms and compared them with those existing in the literature. We first treat the minimum independent dominating set problem: we solve it exactly and prove that there are special classes of graphs in which it is 2-approximable. We then solve the vertex cover and connected vertex cover problems with both an exact algorithm and an approximation algorithm. Finally, we extend this work to related problems in graphs with conflicts between vertices.
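A small sketch may help make the clique-partition idea concrete. The fragment below is purely illustrative and assumes nothing about the thesis's actual algorithms (the greedy strategy and all names are this summary's): it partitions the vertices into disjoint cliques and derives the classical lower bound for minimum vertex cover, |V| - m for a partition into m cliques, since any clique of size k forces at least k - 1 of its vertices into every cover. Bounds of this kind are what let a clique partition both prune exact search and certify approximation quality.

```python
# Illustrative only: greedy clique partition and the vertex-cover lower
# bound it induces. A partition into m cliques certifies that any vertex
# cover has size at least |V| - m.

def greedy_clique_partition(vertices, adj):
    """Partition `vertices` into cliques greedily; `adj` maps v -> neighbour set."""
    cliques = []
    unassigned = set(vertices)
    while unassigned:
        v = unassigned.pop()
        clique = {v}
        for u in list(unassigned):
            # u joins only if adjacent to every current member of the clique.
            if all(u in adj[w] for w in clique):
                clique.add(u)
                unassigned.remove(u)
        cliques.append(clique)
    return cliques

def vertex_cover_lower_bound(vertices, adj):
    """|V| - (number of cliques): a lower bound on the minimum vertex cover."""
    cliques = greedy_clique_partition(vertices, adj)
    return sum(len(c) - 1 for c in cliques)

# Toy graph: a triangle {1,2,3} with a pendant vertex 4 attached to 3.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(vertex_cover_lower_bound(adj.keys(), adj))  # 2, which matches the optimum here
```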
42

Algèbre linéaire exacte, parallèle, adaptative et générique / Adaptive and generic parallel exact linear algebra

Sultan, Ziad 17 June 2016
Triangular matrix decompositions are fundamental building blocks in computational linear algebra. They are used to solve linear systems and to compute the rank, the determinant, the null-space, or the row and column rank profiles of a matrix. The goal of this thesis is to develop high-performance parallel implementations of exact Gaussian elimination on shared-memory machines. In order to abstract the computational code from the parallel programming environment, we developed a domain-specific language, PALADIn (Parallel Algebraic Linear Algebra Dedicated Interface), based on C/C++ macros. It lets the user write C++ code and benefit from sequential and parallel executions on shared-memory architectures using the standard OpenMP, TBB, and Kaapi parallel runtime systems, thus providing both data and task parallelism. Several aspects of parallel exact linear algebra were studied. We incrementally built efficient parallel kernels for matrix multiplication and triangular system solving, on top of which several variants of the PLUQ decomposition algorithm are constructed. We study the parallelization of these kernels using several algorithmic variants, either iterative or recursive, and using different splitting strategies. We propose a new recursive Gaussian elimination algorithm that can compute simultaneously the row and column rank profiles of a matrix, as well as those of all of its leading submatrices, in the same time as state-of-the-art Gaussian elimination algorithms. We also study the conditions under which a Gaussian elimination algorithm reveals this information, by defining a new matrix invariant, the rank profile matrix.
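As background for the rank-profile vocabulary above: the column rank profile of a rank-r matrix is the lexicographically smallest list of r column indices whose columns are linearly independent. The following minimal sketch (an illustrative toy, not PALADIn or the thesis's parallel kernels) computes it by exact Gaussian elimination over the prime field Z/p, where arithmetic is exact by construction:

```python
# Minimal sketch: exact elimination over Z/p, returning the rank and the
# column rank profile (indices of the first r independent columns).

def rank_profile_mod_p(A, p=101):
    """Row-reduce a copy of A over Z/p; return (rank, pivot column indices)."""
    A = [[x % p for x in row] for row in A]
    n_rows, n_cols = len(A), len(A[0])
    pivots, r = [], 0
    for j in range(n_cols):
        # Find a pivot in column j at or below row r.
        pivot = next((i for i in range(r, n_rows) if A[i][j] != 0), None)
        if pivot is None:
            continue
        A[r], A[pivot] = A[pivot], A[r]
        inv = pow(A[r][j], p - 2, p)          # inverse via Fermat's little theorem
        A[r] = [(x * inv) % p for x in A[r]]  # normalise the pivot row
        for i in range(n_rows):
            if i != r and A[i][j] != 0:
                f = A[i][j]
                A[i] = [(a - f * b) % p for a, b in zip(A[i], A[r])]
        pivots.append(j)
        r += 1
    return r, pivots

print(rank_profile_mod_p([[1, 2, 3], [2, 4, 6], [0, 1, 1]]))  # (2, [0, 1])
```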
43

L'indépendant faiblement connexe : études algorithmiques et polyédrales / Weakly connected independent sets : algorithmic and polyhedral studies

Mameri, Djelloul 25 November 2014
In this work we focus on a topology for wireless sensor networks (WSNs). A wireless sensor network can be modeled as an undirected graph G = (V,E): each vertex of V represents a sensor, and an edge e = {u,v} in E indicates a possible direct transmission between sensors u and v. Unlike wired devices, wireless sensors are not arranged in a network a priori; a topology must be created by selecting "dominator" nodes that manage transmissions. The architectures studied in the literature are mainly based on connected dominating sets and weakly connected dominating sets. This study is devoted to weakly connected independent sets. An independent set S ⊂ V is said to be weakly connected if the graph G_S = (V, [S, V\S]) is connected, where [S, V\S] is the set of edges of E with exactly one endpoint in S. A topology based on weakly connected sets partitions the sensors into three groups: slaves, which perform the measurements; masters, which gather the collected data; and bridges, which provide the inter-group communications. We first give some properties of this combinatorial structure when the undirected graph G is connected. We then provide complexity results for the minimum weakly connected independent set problem (MWCISP) and describe an exact enumeration algorithm of complexity O*(1.4655^|V|) for it; numerical tests of this exact procedure are presented. We then formulate the MWCISP as an integer linear program and study its associated polytope, which is completely characterized when G is an odd cycle. We study graph composition operations and their polyhedral consequences, and we introduce valid inequalities, notably the so-called multi-border constraints. Finally, we develop a branch-and-cut algorithm on top of CPLEX, using heuristics to separate our families of constraints, and report experimental results on two classes of graphs.
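The defining condition is easy to state operationally. The hypothetical Python check below (names and representation are this summary's, not the thesis's) verifies, for a candidate set S, both independence and the connectivity of G_S = (V, [S, V\S]):

```python
# Illustrative check of the definition: S is a weakly connected independent
# set of G if no edge joins two vertices of S and the graph keeping only the
# edges with exactly one end in S is connected on all of V.

def is_weakly_connected_independent(vertices, edges, S):
    S = set(S)
    # Independence: no edge inside S.
    if any(u in S and v in S for u, v in edges):
        return False
    # Keep only edges with exactly one endpoint in S, then test connectivity.
    kept = [(u, v) for u, v in edges if (u in S) != (v in S)]
    adj = {v: set() for v in vertices}
    for u, v in kept:
        adj[u].add(v)
        adj[v].add(u)
    seen, stack = set(), [next(iter(vertices))]
    while stack:
        x = stack.pop()
        if x not in seen:
            seen.add(x)
            stack.extend(adj[x] - seen)
    return seen == set(vertices)

# Toy example: path 1-2-3-4 with S = {2,4}; every edge has exactly one end in S.
edges = [(1, 2), (2, 3), (3, 4)]
print(is_weakly_connected_independent({1, 2, 3, 4}, edges, {2, 4}))  # True
```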
44

Současné možnosti řešení správy a oběhu dokumentů ve firmě / Current possible arrangements for a company's document management and circulation

Kučerová Zrálíková, Václava January 2011
The thesis analyzes the area of enterprise content management (ECM), identifies its relation to other business applications, and defines the stages of the document life cycle. It also describes the individual components of ECM, including the technologies that cover them. The practical part presents in more detail a document management and storage (DMS) tool used in company ABC: the Exact Synergy Enterprise software is used to demonstrate the advantages and disadvantages affecting document management, as well as the operational problems this kind of software may entail.
45

Examining spatial arbitrage: Effect of electronic commerce and arbitrageur strategies

Subramanian, Hemang C. 07 January 2016
Markets increase social welfare by matching willing buyers and sellers, so it is important to understand whether markets are fulfilling their societal purpose and operating efficiently. The prevalence of spatial arbitrage in a market is an important indicator of its efficiency. The two essays in my dissertation study spatial arbitrage and the behavior of arbitrageurs. Electronic commerce can improve market efficiency by helping buyers and sellers find and transact with each other across geographic distance. In the first essay, we study the effect of two distinct forms of electronic commerce on market efficiency, which we measure via the prevalence of spatial arbitrage. Spatial arbitrage is a more precise measure than the typically used price dispersion, because it accounts for the transaction costs of trading across distance and for unobserved product heterogeneity. Studying two forms of electronic commerce allows us to examine how the theoretical mechanisms of expanded reach and transaction immediacy affect market efficiency. We find that electronic commerce reduces the number of arbitrage opportunities but improves arbitrageurs' ability to identify and exploit those that remain. Overall, our results provide a novel and nuanced understanding of how electronic commerce improves market efficiency. Studying arbitrageur strategies helps us understand how arbitrageur behavior affects markets by increasing or reducing spatial arbitrage. In the second essay, we study the specialization strategies of arbitrageurs, who specialize by asset type and sourcing location. We investigate the role of specialization and find that it affects both arbitrage profits and arbitrage intensity. We further find that specialization strategies evolve over time and that different groups of arbitrageurs adapt differently, based on behavioral biases and environmental factors. Overall, our findings support the predictions of the adaptive markets hypothesis and help us understand antecedents, such as capital and arbitrage intensity, that affect the evolution of arbitrageur strategy.
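For readers who want the measurement logic spelled out, here is one hedged way to write it (the notation is ours, not the essays'):

```latex
% Hedged shorthand (our notation): an arbitrage opportunity between
% locations i and j exists only when the price gap survives the cost of
% trading across the distance between them,
\[
\pi_{ij} = p_j - p_i - c_{ij}, \qquad \text{profitable iff } \pi_{ij} > 0,
\]
% whereas price dispersion alone compares p_j - p_i. Netting out c_{ij}
% (and unobserved product heterogeneity) is what makes realised arbitrage
% the sharper signal of residual market inefficiency.
```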
46

Exact sampling and optimisation in statistical machine translation

Aziz, Wilker Ferreira January 2014
In Statistical Machine Translation (SMT), inference needs to be performed over a high-complexity discrete distribution defined by the intersection between a translation hypergraph and a target language model. This distribution is too complex to be represented exactly, and one typically resorts to approximation techniques, either to perform optimisation (the task of searching for the optimum translation) or sampling (the task of finding a subset of translations that is statistically representative of the goal distribution). Beam search is an example of an approximate optimisation technique, where maximisation is performed over a heuristically pruned representation of the goal distribution. For inference tasks other than optimisation, rather than finding a single optimum, one is really interested in obtaining a set of probabilistic samples from the distribution. This is the case in training, where one wishes to obtain unbiased estimates of expectations in order to fit the parameters of a model. Samples are also necessary in consensus decoding, where one chooses, from a sample of likely translations, the one that minimises a loss function. Due to the additional computational challenges posed by sampling, n-best lists, a by-product of optimisation, are typically used as a biased approximation to true probabilistic samples. A more direct procedure is to attempt to draw samples directly from the underlying distribution rather than rely on n-best list approximations. Markov Chain Monte Carlo (MCMC) methods, such as Gibbs sampling, offer a way to overcome the tractability issues in sampling; however, their convergence properties are hard to assess. That is, it is difficult to know when, if ever, an MCMC sampler is producing samples that are compatible with the goal distribution. Rejection sampling, a Monte Carlo (MC) method, is more fundamental and natural and offers strong guarantees, such as unbiased samples, but it is typically hard to design for distributions of the kind addressed in SMT, rendering the method intractable. A recent technique that stresses a unified view of the two types of inference task discussed here, optimisation and sampling, is the OS* approach. OS* can be seen as a cross between Adaptive Rejection Sampling (an MC method) and A* optimisation. In this view the intractable goal distribution is upper-bounded by a simpler (thus tractable) proxy distribution, which is then incrementally refined to be closer to the goal until the maximum is found, or until the sampling performance exceeds a certain level. This thesis introduces an approach to exact optimisation and exact sampling in SMT by addressing the tractability issues associated with the intersection between the translation hypergraph and the language model. The two forms of inference are handled in a unified framework based on the OS* approach. In short, an intractable goal distribution, over which one wishes to perform inference, is upper-bounded by tractable proposal distributions. A proposal represents a relaxed version of the complete space of weighted translation derivations, where relaxation happens with respect to the incorporation of the language model. These proposals give an optimistic view on the true model and allow for easier and faster search using standard dynamic programming techniques. In the OS* approach, such proposals are used to perform a form of adaptive rejection sampling. In rejection sampling, samples are drawn from a proposal distribution and accepted or rejected as a function of the mismatch between the proposal and the goal. The technique is adaptive in that rejected samples are used to motivate a refinement of the upper-bound proposal that brings it closer to the goal, improving the rate of acceptance. Optimisation can be connected to an extreme form of sampling, so the framework introduced here suits both exact optimisation and exact sampling. Exact optimisation means that the global maximum is found with a certificate of optimality; exact sampling means that unbiased samples are independently drawn from the goal distribution. We show that with this approach exact inference is feasible using only a fraction of the time and space that would be required by a full intersection, without recourse to pruning techniques that only provide approximate solutions. We also show that the vast majority of the entries (n-grams) in a language model can be summarised by shorter and optimistic entries, which means that the computational complexity of our approach is less sensitive to the order of the language model distribution than a full intersection would be. Particularly in the case of sampling, we show that it is possible to draw exact samples, compatible with distributions that incorporate a high-order language model component, from proxy distributions that are much simpler. In this thesis, exact inference is performed in the context of both hierarchical and phrase-based models of translation, the latter characterising a problem that is NP-complete in nature.
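The accept/reject-and-refine loop just described can be sketched compactly. The fragment below is a simplified illustration under strong assumptions (a finite candidate space and unnormalised scores; the actual OS* framework operates on weighted translation hypergraphs, which this toy does not model):

```python
# Simplified illustration of adaptive rejection sampling with an optimistic
# (upper-bound) proxy, in the spirit of OS*.
import math
import random

def adaptive_rejection_sample(goal, candidates, upper_bound, refine, n_samples):
    """goal(x): true unnormalised score. upper_bound(x): current proxy with
    upper_bound(x) >= goal(x) for all x. refine(x): tighten the proxy at x."""
    samples = []
    while len(samples) < n_samples:
        # Sampling from the proxy: draw proportionally to the upper bounds.
        weights = [upper_bound(x) for x in candidates]
        x = random.choices(candidates, weights=weights)[0]
        if random.random() < goal(x) / upper_bound(x):
            samples.append(x)   # exact, unbiased sample from the goal
        else:
            refine(x)           # rejected: tighten the proxy where it was loose
    return samples

# Toy usage: goal e^(-x) on {0,...,9}; the proxy starts flat and is
# tightened pointwise to the true score on each rejection.
bounds = {x: 1.0 for x in range(10)}
goal = lambda x: math.exp(-x)
print(adaptive_rejection_sample(goal, list(range(10)),
                                upper_bound=lambda x: bounds[x],
                                refine=lambda x: bounds.update({x: goal(x)}),
                                n_samples=5))
```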
47

Polymers in Fractal Disorder

Fricke, Niklas 15 June 2016
This work presents a numerical investigation of self-avoiding walks (SAWs) on percolation clusters, a canonical model for polymers in disordered media. A new algorithm has been developed that allows exact enumeration of walks of over ten thousand steps, an increase of several orders of magnitude compared to previously existing enumeration methods, which allow barely more than forty steps. This increase is achieved by exploiting the fractal structure of critical percolation clusters: they are hierarchically organized into a tree of loosely connected nested regions, in which the walk segments are enumerated separately. After the enumeration process, a region is "decimated" and subsequently behaves effectively as a single point. Since this method only works efficiently near the percolation threshold, a chain-growth Monte Carlo algorithm (PERM) has also been used. The main focus of the investigation was the asymptotic scaling behavior of the average end-to-end distance as a function of the number of steps on critical clusters in different dimensions. Thanks to the highly efficient new method, existing estimates of the scaling exponents could be improved substantially. Also investigated were the number of possible chain conformations and the average entropy, which were found to follow an unusual scaling behavior. For concentrations above the percolation threshold, the exponent describing the growth of the end-to-end distance turned out to differ from that on regular lattices, defying the prediction of the accepted theory. Finally, SAWs with short-range attractions on percolation clusters are discussed. Here it emerged that there seems to be no temperature-driven collapse transition, as the asymptotic scaling behavior of the end-to-end distance even at zero temperature is the same as for athermal SAWs.
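For contrast with the new hierarchical method, the naive baseline it supersedes can be written in a few lines. This sketch (ours, not the author's code) exactly enumerates all n-step SAWs on the open sites of a diluted square lattice and accumulates the squared end-to-end distance; its exponential runtime in n is precisely why plain enumeration stalls near forty steps:

```python
# Brute-force exact enumeration of n-step SAWs on a 2D site-percolation
# lattice. Exponential in n; the thesis's hierarchical decimation is what
# makes n ~ 10^4 reachable.
import random

def enumerate_saws(open_sites, start, n_steps):
    """Return (number of SAWs, sum of squared end-to-end distances)."""
    count, sum_r2 = 0, 0
    def walk(pos, visited, left):
        nonlocal count, sum_r2
        if left == 0:
            count += 1
            sum_r2 += (pos[0] - start[0]) ** 2 + (pos[1] - start[1]) ** 2
            return
        x, y = pos
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in open_sites and nxt not in visited:
                walk(nxt, visited | {nxt}, left - 1)
    walk(start, {start}, n_steps)
    return count, sum_r2

# Dilute lattice near the 2D site-percolation threshold p_c ~ 0.5927.
random.seed(0)
L, p = 12, 0.5927
open_sites = {(x, y) for x in range(L) for y in range(L) if random.random() < p}
start = (L // 2, L // 2)
open_sites.add(start)  # guarantee an open starting site
n, r2 = enumerate_saws(open_sites, start, 6)
print(n, r2 / n if n else float("nan"))  # conformations and mean-squared distance
```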
48

Efficient Exact Tests in Linear Mixed Models for Longitudinal Microbiome Studies

Zhai, Jing January 2016
The microbiome plays an important role in human health, and the analysis of associations between the microbiome and clinical outcomes has become an active direction in biostatistics research. Testing the microbiome's effect on clinical phenotypes directly from operational taxonomic unit (OTU) abundance data is challenging due to the high dimensionality, non-normality, and phylogenetic structure of the data. Most studies only describe the changes in microbial populations that occur in patients with a specific clinical condition. Instead, a statistical strategy using distance-based or similarity-based non-parametric testing, in which a distance or similarity measure is defined between any two microbiome samples, has been developed to assess the association between microbiome composition and outcomes of interest. Despite these improvements, such tests are still not easily interpretable and cannot adjust for potential covariates. A kernel-based semi-parametric regression framework can evaluate the association while controlling for covariates: it uses a kernel function, a measure of similarity between samples' microbiome compositions, to characterize the relationship between the microbiome and the outcome of interest. This kernel-based regression model, however, cannot be applied in longitudinal studies, since it cannot model the correlation between repeated measurements. We propose microbiome association exact tests (MAETs), based on linear mixed models, that can handle longitudinal microbiome data. MAETs can test not only the effect of the overall microbiome but also the effect of specific clusters of OTUs while controlling for the others, by introducing additional random effects in the model. Current methods for testing multiple variance components rely on either asymptotic distributions or the parametric bootstrap, which require large sample sizes or high computational cost. The exact (restricted) likelihood ratio test ((R)LRT), a computationally efficient and powerful testing methodology, was derived by Crainiceanu. Since the exact (R)LRT can only test one variance component, we propose an approach that combines the recent development of the exact (R)LRT with a strategy for reducing a linear mixed model with multiple variance components to the single-component case. Monte Carlo simulation studies show correctly controlled type I error and superior power in testing the association between the microbiome and outcomes in longitudinal studies. Finally, the MAETs were applied to longitudinal pulmonary microbiome data to demonstrate that microbiome composition is associated with lung function and immunological outcomes; we also found two interesting genera, Prevotella and Veillonella, that are associated with forced vital capacity.
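Schematically, and with notation that is assumed rather than taken from the thesis, the testing problem can be written as a linear mixed model with one variance component per OTU cluster:

```latex
% Notation assumed for illustration; not copied from the thesis.
% Linear mixed model with one variance component per OTU cluster,
% where K_k is the kernel similarity matrix induced by cluster k:
\[
y = X\beta + \sum_{k=1}^{K} Z_k b_k + \varepsilon,
\qquad b_k \sim \mathcal{N}(0,\, \sigma_k^2 K_k),
\qquad \varepsilon \sim \mathcal{N}(0,\, \sigma_e^2 I).
\]
% Association of cluster k with the outcome, controlling for the others,
% is the boundary hypothesis
\[
H_0:\ \sigma_k^2 = 0 \qquad \text{vs.} \qquad H_1:\ \sigma_k^2 > 0,
\]
% which the exact (R)LRT handles once the multi-component model has been
% reduced to an equivalent single-variance-component case.
```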
49

Exact Markov chain Monte Carlo and Bayesian linear regression

Bentley, Jason Phillip January 2009
In this work we investigate the use of perfect sampling methods within the context of Bayesian linear regression. We focus on inference problems related to the marginal posterior model probabilities; model-averaged inference for the response and Bayesian variable selection are considered. Perfect sampling is an alternative form of Markov chain Monte Carlo that generates exact sample points from the posterior of interest, removing the need for the burn-in assessment faced by traditional MCMC methods. For model-averaged inference, we find the monotone Gibbs coupling from the past (CFTP) algorithm is the preferred choice. This requires the predictor matrix to be orthogonal, preventing variable selection but allowing model averaging for prediction of the response. Exploring choices of priors for the parameters in the Bayesian linear model, we investigate sufficient conditions for monotonicity, assuming Gaussian errors. We discover that a number of other sufficient conditions exist, besides an orthogonal predictor matrix, for the construction of a monotone Gibbs Markov chain. Requiring an orthogonal predictor matrix, we investigate new methods of orthogonalizing the original predictor matrix, and we find that a new method using the modified Gram-Schmidt orthogonalization procedure performs comparably with existing transformation methods, such as generalized principal components. Accounting for the effect of using an orthogonal predictor matrix, we discover that inference using model averaging for in-sample prediction of the response is comparable between the original and orthogonal predictor matrices. The Gibbs sampler is then investigated for sampling when using the original predictor matrix and the orthogonal predictor matrix. We find that a hybrid method, using a standard Gibbs sampler on the orthogonal space in conjunction with the monotone CFTP Gibbs sampler, provides the fastest computation and convergence to the posterior distribution. We conclude that the hybrid approach should be used when the monotone Gibbs CFTP sampler becomes impractical due to large backwards coupling times; we demonstrate that large backwards coupling times occur when the sample size is close to the number of predictors, or when hyper-parameter choices increase model competition. The monotone Gibbs CFTP sampler should be taken advantage of when the backwards coupling time is small. For the problem of variable selection we turn to the exact version of the independent Metropolis-Hastings (IMH) algorithm. We reiterate the notion that the exact IMH sampler is redundant, being a needlessly complicated rejection sampler, and we then determine that a rejection sampler is feasible for variable selection when the sample size is close to the number of predictors and Zellner's prior is used with a small value of the hyper-parameter c. Finally, we use the example of simulating from the posterior of c conditional on a model to demonstrate how the exact IMH viewpoint clarifies how the rejection sampler can be adapted to improve efficiency.
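Monotone CFTP itself is easy to illustrate on a toy chain. The sketch below (a standard textbook example, not the thesis's regression sampler) runs coupled chains from the maximal and minimal states with shared random numbers, restarting ever further in the past until they coalesce, which yields an exact draw from the stationary distribution with no burn-in:

```python
# Toy monotone coupling from the past: a random walk on {0, ..., m}.
# Because the update is monotone in the state for a fixed random number,
# coalescence of the top and bottom chains implies coalescence of all
# start states, so the value returned at time 0 is an exact sample.
import random

def monotone_update(state, u, m):
    """Monotone in `state` for fixed u: step up if u > 0.5, else down."""
    return min(state + 1, m) if u > 0.5 else max(state - 1, 0)

def monotone_cftp(m=10, seed=1):
    rng = random.Random(seed)
    noise = {}                      # reuse the same u_t on every restart
    T = 1
    while True:
        lo, hi = 0, m               # minimal and maximal starting states
        for t in range(-T, 0):
            u = noise.setdefault(t, rng.random())
            lo = monotone_update(lo, u, m)
            hi = monotone_update(hi, u, m)
        if lo == hi:                # all start states have coalesced
            return lo               # exact stationary sample
        T *= 2                      # restart further back in the past

print(monotone_cftp())
```

The backwards coupling time is the depth T at which coalescence first occurs; the abstract's caveat about "large backwards coupling times" corresponds to this doubling loop running long before the top and bottom chains meet.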
50

Analysis and Visualization of Exact Solutions to Einstein's Field Equations

Abdelqader, Majd 02 October 2013
Einstein's field equations are extremely difficult to solve, and when solved, the solutions are even harder to understand. In this thesis, two analysis tools are developed to explore and visualize the curvature of spacetimes. The first tool is based on a thorough examination of observer-independent curvature invariants constructed from different contractions of the Riemann curvature tensor. These invariants are analyzed through their gradient fields, with attention given to the resulting flow and critical points. Furthermore, we propose a Newtonian analog to some general relativistic invariants based on the underlying physical meaning of these invariants, where they represent the cumulative tidal and frame-dragging effects of the spacetime. This provides us with a novel and intuitive tool to compare Newtonian gravitational fields to exact solutions of Einstein's field equations on an equal footing. We analyze the obscure Curzon-Chazy solution using the new approach, and reveal rich structure that resembles the Newtonian gravitational field of a non-rotating ring, as has been suspected for decades. Next, we examine the important Kerr solution, which describes the gravitational field of rotating black holes. We discover that the observable part of the geometry outside the black hole's event horizon depends significantly on its angular momentum. The fields representing the cumulative tidal and frame-dragging forces change qualitatively at seven specific values of the dimensionless spin parameter of the black hole. The second tool we develop in this thesis is the accurate construction of Penrose conformal diagrams. These diagrams are a valuable tool to explore the causal structure of spacetimes, where the entire spacetime is compactified to a finite size and the coordinate choice is fixed such that light rays are straight lines on the diagram. However, for most spacetimes these diagrams can only be constructed as a qualitative guess, since their null geodesics cannot be solved analytically. We developed an algorithm to construct very accurate Penrose diagrams based on numerical solutions to the null geodesics, and applied it to the McVittie metric. These diagrams confirmed the long-held suspicion that this spacetime does indeed describe a black hole embedded in an isotropic universe. / Thesis (Ph.D., Physics, Engineering Physics and Astronomy), Queen's University, 2013.
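As an example of the kind of observer-independent invariant the first tool is built on (standard definitions, not specific results of the thesis): the Kretschmann scalar is the full contraction of the Riemann tensor, and for the Schwarzschild solution it has a closed form:

```latex
% Standard definitions, for orientation only. The Kretschmann scalar is a
% full contraction of the Riemann tensor,
\[
K = R_{abcd} R^{abcd},
\]
% and for the Schwarzschild solution of mass M it reduces to
\[
K = \frac{48\, G^2 M^2}{c^4 r^6},
\]
% an observer-independent curvature measure whose gradient field \nabla_a K
% has the kind of flow and critical-point structure the first tool analyzes.
```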
