31

Regge Calculus as a Numerical Approach to General Relativity

Khavari, Parandis 17 January 2012
A (3+1)-evolutionary method in the framework of Regge Calculus, known as the "Parallelisable Implicit Evolutionary Scheme", is analysed and revised so that it accounts for causality. Furthermore, the ambiguities associated with the notion of time in this evolutionary scheme are addressed and a way of resolving them is presented. The revised algorithm is then numerically tested and shown to produce the desired results and indeed to resolve a problem previously faced upon implementing this scheme. An important issue that had been overlooked in the "Parallelisable Implicit Evolutionary Scheme" is the set of restrictions on the choice of edge lengths used to build the space-time lattice as it evolves in time. It is essential to know what inequalities must hold between the edges of a 4-dimensional simplex, used to construct a space-time, so that the geometry inside the simplex is Minkowskian. The only known inequality on the Minkowski plane is the "Reverse Triangle Inequality", which holds between the edges of a triangle constructed only from space-like edges. However, a triangle on the Minkowski plane can be built from a combination of time-like, space-like or null edges. Part of this thesis is concerned with deriving a number of inequalities that must hold between the edges of such mixed triangles. Finally, the Raychaudhuri equation is considered from the point of view of Regge Calculus. The Raychaudhuri equation plays an important role in many areas of relativistic physics and astrophysics, most notably in the proofs of the singularity theorems. An analogue of the Raychaudhuri equation in the framework of Regge Calculus is derived. Both (2+1)-dimensional and (3+1)-dimensional cases are considered, and analogues of the average expansion and the shear scalar are found.
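For context, the continuum equation whose Regge analogue is derived here is the standard Raychaudhuri equation for a timelike geodesic congruence (the abstract does not reproduce the discrete form, so this is only the familiar continuum reference point):

    \frac{d\theta}{d\tau} = -\frac{1}{3}\theta^{2} - \sigma_{ab}\sigma^{ab} + \omega_{ab}\omega^{ab} - R_{ab}u^{a}u^{b}

where \theta is the expansion, \sigma_{ab} the shear, \omega_{ab} the vorticity, and u^{a} the tangent to the congruence; in the (2+1)-dimensional case the factor 1/3 is replaced by 1/2 because the transverse space is two-dimensional.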
32

Observational Studies of Interacting Galaxies and the Development of the Wide Integral Field Infrared Spectrograph

Chou, Chueh-Yi 19 March 2013
Interacting galaxies are thought to be the essential building blocks of elliptical galaxies under the hierarchical galaxy formation scenario. The goal of my dissertation is to broaden our understanding of galaxy merger evolution through both observational studies and instrument development. Observationally, I approach the goal photometrically and spectroscopically. The photometric studies better constrain the number density evolution of wet and dry mergers using photometry in five CFHTLS broad bands up to z~1. Meanwhile, by comparing the merger and elliptical galaxy mass density functions, I discovered that the most massive elliptical galaxies are not all formed via merging, unless the merging timescale is much longer than the expected value. Spectroscopically, the kinematic properties of close-pair galaxies were studied to understand how star formation was quenched at z~0.5. I discovered that red-red pairs are rare, which does not support the gravitational quenching mechanism suggested by the hot halo model. In instrumentation, an efficient way to study galaxy mergers is integral field spectroscopy, capitalizing on its intrinsic capability of obtaining 2-D spectra efficiently. However, currently available integral field spectrographs do not provide the combination of integral field size and spectral resolution required for merger studies. I have therefore developed two optical designs of a wide integral field infrared spectrograph (WIFIS), which I call WIFIS1 and WIFIS2, to satisfy the requirements of merger studies. Both designs provide an integral field of 12" x 5" on 10-m telescopes (or equivalently 52" x 20" on 2.3-m telescopes). WIFIS1 delivers a spectral resolving power of 5,500, covering each of the J, H, and K bands in a single exposure; WIFIS2 delivers a lower resolving power of 3,000, focusing on the shorter zJ and H bands. All the WIFIS2 optical components have either been fabricated or are being fabricated, and some of them, including the integral field unit, gratings, and mirrors, have been characterized in the laboratory. The WIFIS2-based spectrograph is expected to be completed in the summer of 2013, followed by a WIFIS1-based spectrograph in a few years.
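As a rough illustration of how close-pair counts of this kind translate into merger statistics, the sketch below converts a pair fraction into a merger rate per galaxy. The function name, the observability timescale, and the numbers are assumptions for illustration, not values taken from the thesis.

    def merger_rate_per_galaxy(pair_fraction, t_obs_gyr=0.5, c_merge=0.6):
        # Standard pair-count estimator: R = C_merge * f_pair / T_obs, where
        # T_obs is the time a merging system remains identifiable as a close
        # pair and C_merge is the fraction of observed pairs that actually
        # merge.  Both values here are assumed, not taken from the thesis.
        return c_merge * pair_fraction / t_obs_gyr

    # Illustrative only: a 4% close-pair fraction at z ~ 0.5
    print(merger_rate_per_galaxy(0.04))   # ~0.05 mergers per galaxy per Gyr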
33

Gravitational Lensing and the Maximum Number of Images

Bayer, Johann 26 February 2009
Gravitational lensing, initially a phenomenon used as a solid confirmation of General Relativity, has established itself in the past decade as a standard astrophysical tool. The ability of a lensing system to produce multiple images of a luminous source is one of the aspects of gravitational lensing that is exploited both theoretically and observationally to improve our understanding of the Universe. In this thesis, within the field of multiple imaging, we explore maximal lensing, that is, the configurations and conditions under which a set of deflecting masses can produce the maximum number of images of a distant luminous source, as well as the value of this maximum number itself. We study the case of a symmetric distribution of n-1 point-mass lenses at the vertices of a regular polygon with n-1 sides. By adding a perturbation in the form of an n-th mass at the center of the polygon, it is proven that, as long as this mass is small enough, the system is a maximal lensing configuration that produces 5(n-1) images. Using the explicit value of the upper bound on the central mass that leads to maximal lensing, we illustrate how this result can be used to find and constrain the mass of planets or brown dwarfs in multiple star systems. For the case of more realistic mass distributions, we prove that when a point mass is replaced with a distributed lens that does not overlap with existing images or lensing objects, an additional image is formed within the distributed mass while the positions and number of existing images are left unchanged. This is then used to conclude that the maximum number of images that n isolated distributed lenses can produce is 6(n-1)+1. In order to explore the likelihood of observational verification, we analyze the stability properties of the symmetric maximal lensing configurations. Finally, for the cases of n=4, 5, and 6 point-mass lenses, we study asymmetric maximal lensing configurations and compare their stability properties against the symmetric case.
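For reference, point-mass maximal-lensing analyses of this kind are usually phrased through the complex lens equation (the abstract does not write it out; the notation below follows the common convention with positions and masses scaled to Einstein-radius units):

    z_s = z - \sum_{j=1}^{n} \frac{m_j}{\bar{z} - \bar{z}_j}

where z_s is the source position, z an image position, and z_j, m_j the lens positions and masses. The images are the solutions z of this equation, and the 5(n-1) and 6(n-1)+1 counts quoted above bound the number of such solutions.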
34

Astrometry.net: Automatic Recognition and Calibration of Astronomical Images

Lang, Dustin 03 March 2010
We present Astrometry.net, a system for automatically recognizing and astrometrically calibrating astronomical images, using the information in the image pixels alone. The system is based on the geometric hashing approach in computer vision: we use the geometric relationships between low-level features (stars and galaxies), which are individually indistinctive, to create geometric features that are distinctive enough that we can recognize images covering less than one-millionth of the area of the sky. The geometric features are used to rapidly generate hypotheses about the location (the pointing, scale, and rotation) of an image on the sky. Each hypothesis is then evaluated in a Bayesian decision theory framework to ensure that most correct hypotheses are accepted while false hypotheses are almost never accepted. The feature-matching process is accelerated by a new fast and space-efficient kd-tree implementation. The Astrometry.net system is available via a web interface, and the software is released under an open-source license. It is being used by hundreds of individual astronomers and several large-scale projects, so we have at least partially achieved our goal of helping "to organize, annotate and make searchable all the world's astronomical information."
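A minimal sketch of the quad-style geometric hashing idea described here: four stars are reduced to a small continuous code that is invariant to translation, rotation, and scale of the image. The frame convention (mapping the widest pair to (0, 0) and (1, 1)) follows the published Astrometry.net description, but the details below (ordering rule, function name, example coordinates) are illustrative assumptions rather than the production code.

    import numpy as np

    def quad_hash(stars):
        # stars: list of four (x, y) pixel positions of detected stars.
        z = np.array([complex(x, y) for x, y in stars])
        # Choose the most widely separated pair as the reference stars A, B.
        d = np.abs(z[:, None] - z[None, :])
        i, j = np.unravel_index(np.argmax(d), d.shape)
        rest = [k for k in range(4) if k not in (i, j)]
        # Similarity transform sending A to (0, 0) and B to (1, 1); the frame
        # coordinates of the remaining two stars form the hash code.
        w = (z[rest] - z[i]) / (z[j] - z[i]) * (1 + 1j)
        c, e = sorted(w, key=lambda p: p.real)   # fix an ordering for C and D
        return (c.real, c.imag, e.real, e.imag)

    # Example with made-up pixel coordinates
    print(quad_hash([(10.0, 12.0), (200.0, 180.0), (60.0, 150.0), (140.0, 40.0)]))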
35

Growth of Planetesimals and the Formation of Debris Disks

Shannon, Andrew 31 August 2012
At the edge of the Solar System lies the Kuiper Belt, a ring of leftover planetesimals from the era of planet formation. Collisions between the Kuiper Belt Objects produce dust grains, which absorb and re-radiate stellar radiation. The total amount of stellar radiation so absorbed is perhaps one part in ten million. Analogous to this, Sun-like stars at Sun-like ages commonly have dusty debris disks, which absorb and re-radiate as much as one part in ten thousand of the stellar radiation. We set out to understand this difference. In chapter 1, we outline the relevant observations and give a feel for the relevant physics. In chapter 2, we turn to the extrasolar debris disks. Using disks spanning a wide range of ages, we construct a pseudo-evolution sequence for extrasolar debris disks. We apply a straightforward collision model to this sequence, and find that the brightest disks are a hundred to a thousand times as massive as the Kuiper Belt, which causes the difference in dust luminosity. Current theoretical models of planetesimal growth predict very low efficiency in making large planetesimals, such that the Kuiper Belt should be the typical outcome of Minimum Mass Solar Nebula type disks. These models cannot produce the massive disks we find around other stars. We revisit these models in chapter 3, to understand the origin of this low efficiency. We confirm that these models, which begin with kilometer sized planetesimals, cannot produce the observed extrasolar debris disks. Instead, we propose an alternate model where most mass begins in centimeter sized grains, with some kilometer sized seed planetesimals. In this model, collisional cooling amongst the centimeter grains produces a new growth mode. We show in chapter 4 that this can produce the Kuiper Belt from a belt not much more massive than the Kuiper Belt today. We follow in chapter 5 by showing that this model can also produce the massive planetesimal populations needed to produce extrasolar debris disks.
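A back-of-the-envelope way to connect the mass ratio quoted here to the luminosity ratio (this scaling is an illustrative assumption, not a calculation from the thesis): for a belt with a fixed collisional size distribution, the total dust cross-section, and hence the fractional luminosity f = L_dust / L_*, scales roughly linearly with the belt mass, so

    f_disk / f_KB ~ M_disk / M_KB ~ 10^{-4} / 10^{-7} ~ 10^{3}

which is consistent with the brightest disks being a hundred to a thousand times as massive as the Kuiper Belt.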
39

Measuring the 21cm Power Spectrum from the Epoch of Reionization with the Giant Metrewave Radio Telescope

Paciga, Gregory 14 January 2014
The Epoch of Reionization (EoR) is the transitional period in the universe's evolution that begins when the first luminous sources start to ionize the intergalactic medium for the first time since recombination, and ends when most of the hydrogen is ionized, by a redshift of about 6. Observations of the 21cm emission from the hyperfine splitting of the hydrogen atom can carry a wealth of cosmological information from this epoch, since the redshifted line can probe the entire volume. The GMRT-EoR experiment is an ongoing effort to make a statistical detection of the power spectrum of 21cm neutral hydrogen emission due to the patchwork of neutral and ionized regions present during the transition. In this work we detail approximately five years of observations at the GMRT, comprising over 900 hours, and an in-depth analysis of about 50 hours, which has led to the first upper limits on the 21cm power spectrum in the range z=8.1 to 9.2. This includes a concentrated radio frequency interference (RFI) mitigation campaign around the GMRT site, a novel method for removing broadband RFI with a singular value decomposition, and calibration with a pulsar as both a phase and polarization calibrator. Preliminary results from 2011 showed a 2-sigma upper limit on the power spectrum of (70 mK)^2. However, we find that foreground removal strategies tend to reduce the cosmological signal significantly, and modeling this signal loss is crucial for interpreting power spectrum measurements. Using a simulated signal to estimate the transfer function of the real 21cm signal through the foreground removal procedure, we are able to find the optimal level of foreground removal and correct for the signal loss. Using this correction, we report a 2-sigma upper limit of (248 mK)^2 at k=0.5 h/Mpc.
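The broadband-mode removal mentioned above can be illustrated with a generic SVD sketch: decompose the time-frequency data matrix and subtract its few largest singular modes, which capture structure that is smooth across frequency (broadband RFI, bright foregrounds). The function, the number of modes removed, and the toy data are assumptions for illustration, not the thesis pipeline.

    import numpy as np

    def remove_broadband_modes(data, n_modes=2):
        # data: time x frequency matrix.  Zero the n_modes largest singular
        # values and reconstruct, removing the dominant broadband structure.
        u, s, vt = np.linalg.svd(data, full_matrices=False)
        s_cut = s.copy()
        s_cut[:n_modes] = 0.0
        return (u * s_cut) @ vt

    # Toy example: 100 time samples x 64 channels of noise plus a broadband ripple
    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 100)[:, None]
    data = rng.normal(size=(100, 64)) + 5.0 * np.sin(2 * np.pi * 3 * t)
    cleaned = remove_broadband_modes(data, n_modes=1)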
