61

Mass Transfer and Evolution of Compact Binary Stars

Gokhale, Vayujeet 15 February 2007 (has links)
We present a study of key aspects of the evolution of binary stars, with emphasis on binaries consisting of two white dwarf stars. The evolution of such systems is driven by the loss of angular momentum to gravitational-wave radiation. Effects like mass transfer and other modes of angular momentum loss and redistribution influence the evolutionary fate of the binary and can lead to a merger, to the tidal disruption of one of the components, or to survival as a long-lived AM Canum Venaticorum (AM CVn) type system. Our study takes some of these effects into account, including mass loss, tides, accretion-disk formation, and direct-impact accretion. We find that under some circumstances the tidal coupling between the spin of the components and the orbit of the binary leads to oscillations in the orbital separation and the mass transfer rate. We also find that, compared to previous studies, a larger fraction of these systems should survive to form AM CVn type systems. We further consider systems in which the mass transfer rate exceeds the critical Eddington rate, leading to mass loss from the system; it is possible that some of the lost mass settles into a disk around the binary to form a circumbinary disk. In the second part of the thesis, we present a toy model for disks in general, and find that the coupling of such a circumbinary disk to the binary has a destabilizing effect on the binary.
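The gravitational-wave driving described in this abstract can be made concrete with the standard point-mass rates of Peters (1964). The sketch below is purely illustrative: the separation and masses are made-up numbers, and it omits the mass transfer, tides, and circumbinary-disk physics that the thesis adds.

```python
# Standard point-mass (Peters 1964) gravitational-wave rates for a
# circular binary.  Illustrative only; not the thesis's evolution code.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
MSUN = 1.989e30      # solar mass, kg

def orbital_decay_rate(a, m1, m2):
    """da/dt (m/s) of the orbital separation due to gravitational waves."""
    return -(64.0 / 5.0) * G**3 * m1 * m2 * (m1 + m2) / (c**5 * a**3)

def merger_time(a, m1, m2):
    """Coalescence time (s) from separation a: t = a^4 / (4 beta)."""
    beta = (64.0 / 5.0) * G**3 * m1 * m2 * (m1 + m2) / c**5
    return a**4 / (4.0 * beta)

# Two 0.6-solar-mass white dwarfs at an assumed separation of 1e8 m:
a0 = 1.0e8
dadt = orbital_decay_rate(a0, 0.6 * MSUN, 0.6 * MSUN)
t_merge = merger_time(a0, 0.6 * MSUN, 0.6 * MSUN)
```

Since |da/dt| grows as the separation shrinks, the decay runs away, which is why the competition with mass transfer and tidal spin-orbit coupling decides whether the system merges or survives as an AM CVn binary.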
62

Quantum Dynamics of Loop Quantum Gravity

Han, Muxin 19 March 2007 (has links)
In the last 20 years, loop quantum gravity, a background-independent approach to unifying general relativity and quantum mechanics, has been widely investigated. The aim of loop quantum gravity is to construct a mathematically rigorous, background-independent, nonperturbative quantum theory for the Lorentzian gravitational field on a four-dimensional manifold. In this approach, the principles of quantum mechanics are combined naturally with those of general relativity. Such a combination provides us with a picture of "quantum Riemannian geometry", which is discrete at a fundamental scale. In the investigation of quantum dynamics, the classical expressions of the constraints are quantized as operators, and the quantum evolution is contained in the solutions of the quantum constraint equations. In addition, a semiclassical analysis has to be carried out in order to test the semiclassical limit of the quantum dynamics. In this thesis, the structure of the dynamical theory in loop quantum gravity is presented pedagogically. The outline is as follows: first we review the classical formalism of general relativity as a dynamical theory of connections; then the kinematical Ashtekar-Isham-Lewandowski representation is introduced as the foundation of loop quantum gravity; we then discuss the construction of a Hamiltonian constraint operator and the master constraint programme, both for pure gravity and with matter-field couplings; finally, some strategies for testing the semiclassical limit of the quantum dynamics are discussed.
63

Computing and Analyzing Gravitational Radiation in Black Hole Simulations Using a New Multi-Block Approach to Numerical Relativity

Dorband, Ernst Nils 23 March 2007 (has links)
Numerical simulations of Kerr black holes are presented and the excitation of quasinormal modes is studied in detail. Issues concerning the extraction of gravitational waves from numerical space-times, and their systematic analysis, are discussed. A new multi-block infrastructure for solving first-order symmetric hyperbolic time-dependent partial differential equations is developed and implemented in such a way that stability is guaranteed for arbitrarily high-order accurate numerical schemes. Multi-block methods use several coordinate patches to cover a computational domain, which provides efficient, flexible and very accurate numerical schemes. Using this code, three-dimensional simulations of perturbed Kerr black holes are carried out. While the quasinormal frequencies for such sources are well known, until now little attention has been paid to the relative excitation strengths of different modes. If an actual perturbed Kerr black hole emits two distinct quasinormal modes that are strong enough to be detected by gravitational-wave observatories, these two modes can be used to test the Kerr nature of the source. This would provide a strong test of the so-called no-hair theorem of general relativity. A systematic method for analyzing ringdown waveforms is proposed. The so-called time-shift problem, an ambiguity in the definition of excitation amplitudes, is identified, and it is shown that this problem can be avoided by looking at appropriately chosen relative mode amplitudes. Rotational mode coupling and the relative excitation strengths of co- and counter-rotating modes and overtones are studied for slowly and rapidly spinning Kerr black holes. A method for extracting waves from numerical space-times, generalizing one of the standard methods based on the Regge-Wheeler-Zerilli perturbation formalism, is presented.
Applying this to evolutions of single perturbed Schwarzschild black holes, the accuracy of the new method is compared to the standard approach, and the errors of the former are found to be one to several orders of magnitude below those of the latter. It is demonstrated that even at large extraction radii (r=80M), the standard extraction approach produces errors that are dominantly systematic in nature rather than due to numerical inaccuracies.
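As a rough illustration of the kind of ringdown analysis described above, one can fit a single damped sinusoid to a synthetic waveform and read off its frequency and damping time. All parameters below are made up; this is a minimal one-mode sketch, not the thesis's multi-mode pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

def ringdown(t, amp, tau, f, phi):
    """Single quasinormal mode: amp * exp(-t/tau) * cos(2*pi*f*t + phi)."""
    return amp * np.exp(-t / tau) * np.cos(2.0 * np.pi * f * t + phi)

# Synthetic ringdown with assumed parameters plus a little noise.
t = np.linspace(0.0, 50.0, 2000)
rng = np.random.default_rng(0)
signal = ringdown(t, 1.0, 12.0, 0.3, 0.4) + 0.01 * rng.normal(size=t.size)

# Seed the frequency guess from the FFT peak, then least-squares fit.
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
f0 = freqs[np.argmax(np.abs(np.fft.rfft(signal)))]
popt, _ = curve_fit(ringdown, t, signal, p0=[signal.max(), 10.0, f0, 0.0])
amp_fit, tau_fit, f_fit, phi_fit = popt
```

Note that shifting the assumed start time by t0 rescales the fitted amplitude by exp(t0/tau) while leaving f and tau unchanged; this is the essence of the time-shift ambiguity that the thesis avoids by working with suitably chosen relative mode amplitudes.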
64

Population Boundaries and Gravitational-Wave Templates for Evolving White Dwarf Binaries

Kopparapu, Ravi Kumar 17 November 2006 (has links)
We present results from our analysis of double white dwarf (DWD) binary star systems in the inspiraling and mass-transfer stages of their evolution. Theoretical constraints on the properties of the white dwarf stars allow us to map out the DWD trajectories in the gravitational-wave amplitude-frequency domain and to identify population boundaries that define distinct sub-domains where inspiraling and/or mass-transferring systems will and will not be found. We identify the subset of these populations for which it should be possible to measure frequency changes and, hence, directly follow orbital evolution, given the anticipated operational time of the proposed space-based gravitational-wave detector, LISA. We show how such measurements should permit the determination of binary system parameters, such as luminosity distances and chirp masses, for mass-transferring as well as inspiraling systems. We also present results from our efforts to generate gravitational-wave templates for a subset of mass-transferring DWD systems that fall into one of the above-mentioned sub-domains. Because templates from a point-mass approximation prove inadequate when the radii of the stars are comparable to the binary separation, we build an evolutionary model that includes finite-size effects such as the spin of the stars and tidal and rotational distortions. In two cases, we compare our model evolution with three-dimensional hydrodynamical models of mass-transferring binaries to demonstrate the accuracy of our results. We conclude that the match is good, except during the final phase of the evolution, when the mass transfer rate is rapidly increasing and the mass-donating star is severely distorted.
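The chirp-mass determination mentioned above rests on the quadrupole-order relation between a binary's gravitational-wave frequency f and its drift fdot. A minimal sketch, assuming a point-mass circular orbit and an illustrative LISA-band system (not the thesis's finite-size model):

```python
import math

G = 6.674e-11    # m^3 kg^-1 s^-2
c = 2.998e8      # m/s
MSUN = 1.989e30  # kg

def fdot_from_chirp_mass(f, mc):
    """Quadrupole-order drift (Hz/s) of GW frequency f for chirp mass mc (kg)."""
    return (96.0 / 5.0) * math.pi ** (8.0 / 3.0) \
        * (G * mc / c**3) ** (5.0 / 3.0) * f ** (11.0 / 3.0)

def chirp_mass_from_fdot(f, fdot):
    """Invert the relation above: measure f and fdot, recover the chirp mass."""
    return (c**3 / G) * (5.0 / 96.0 * math.pi ** (-8.0 / 3.0)
                         * fdot * f ** (-11.0 / 3.0)) ** (3.0 / 5.0)

mc = 0.52 * MSUN   # roughly the chirp mass of two ~0.6 Msun white dwarfs
f = 3.0e-3         # 3 mHz, an assumed frequency in the LISA band
fd = fdot_from_chirp_mass(f, mc)
mc_rec = chirp_mass_from_fdot(f, fd)
```

This is why measuring a frequency change over the mission lifetime is what unlocks the chirp mass (and, with the amplitude, the luminosity distance); for mass-transferring systems fdot departs from this point-mass value, which is the regime the templates in the thesis address.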
65

Investigation of Superficial Dose from a Static TomoTherapy Beam

Smith, Koren Suzette 10 April 2007 (has links)
Purpose: The TomoTherapy planning system is capable of creating treatment plans that deliver a homogeneous dose to superficial targets. It is essential that the planning system accurately predict the dose at the surface and at superficial depths from beams directed at every angle in the axial plane. This work concentrates on measuring and modeling the dose from a static TomoTherapy beam at normal and oblique incidence. It was hypothesized that superficial doses measured from a static TomoTherapy beam agree with doses calculated by the TomoTherapy planning system to within 5% of the maximum dose for angles of incidence from 0°-83°. Methods: Doses were measured with a parallel-plate chamber and TLDs at depths ≤ 2 cm for 40x2.5 cm² and 40x5 cm² static TomoTherapy beams at multiple SSDs and incident angles of 0°-83°. The measurements made with TLDs were compared to those made with the parallel-plate chamber to verify the measured dose. The TomoTherapy treatment planning system was used to calculate doses from single, static beams incident on a flat phantom so that measured and calculated doses could be compared. Results: Surface dose increased from 16.7% to 18.9% as the SSD decreased from 85 to 55 cm for the 40x5 cm² field, and from 12.7% to 14.9% for the 40x2.5 cm² field. The surface dose increased from 16.8% to 44.2% as the angle of incidence increased from 0° to 83° for the 40x5 cm² field, and from 12.8% to 42.6% for the 40x2.5 cm² field. For all measurement conditions, the planning system underpredicted the dose at the surface by more than 5%. For the following measurement conditions and depths, the planning system also underpredicted the dose by more than 5%: 85 and 70 cm SCD at a depth of 0.1 cm, 55 cm SCD at a depth of 0.2 cm, and 30° and 45° at a depth of 0.1 cm. For 75° and 83°, the planning system overpredicted the dose at superficial depths (0.1-0.3 cm) by as much as 7%.
Conclusions: The results of this work indicate that the planning system underpredicted the dose at the surface and at superficial depths (≤ 0.3 cm) from a static TomoTherapy beam at both normal and oblique incidence by more than 5%.
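The 5%-of-maximum-dose agreement criterion used above is a simple pointwise comparison: differences are normalized to the maximum dose rather than to the local dose. A minimal sketch with made-up dose values:

```python
def dose_diffs_pct_dmax(measured, calculated, d_max):
    """Pointwise (measured - calculated) dose differences as a % of d_max."""
    return [100.0 * (m - c) / d_max for m, c in zip(measured, calculated)]

def within_criterion(measured, calculated, d_max, tol_pct=5.0):
    """True if every point agrees to within tol_pct of the maximum dose."""
    return all(abs(d) <= tol_pct
               for d in dose_diffs_pct_dmax(measured, calculated, d_max))

# Hypothetical superficial doses (arbitrary units), d_max = 100:
ok = within_criterion([50.0, 60.0], [48.0, 57.0], 100.0)      # 2% and 3%
bad = within_criterion([50.0], [40.0], 100.0)                 # 10% discrepancy
```

Normalizing to d_max (a "global" criterion) is deliberately more forgiving in low-dose regions than a local-percentage comparison would be.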
66

Segmented Field Electron Conformal Therapy Planning Algorithm

Perrin, David Jaquet 11 November 2008 (has links)
Purpose: Segmented-field electron conformal therapy (SFECT) is rarely used, or, when used, often used suboptimally, primarily due to inadequate planning tools. The development of SFECT planning tools could help bring electron therapy to the same level of sophistication as x-ray and proton therapy, resulting in greater consideration by radiation oncologists. The purpose of this work was to develop a forward planning algorithm that improves segmentation of the SFECT treatment field. It was hypothesized that a forward planning algorithm can produce segmented-field ECT plans whose dose conformity improves as the number of beam energies is increased from one to five using the Varian beam set (6, 9, 12, 16, and 20 MeV). Methods: A planning algorithm that allowed each field segment to have its own energy, shape, size, and weighting was developed. The algorithm generated an initial plan and then performed several iterations of re-planning, each based on the dose distribution of the previous plan, in order to converge the 90% dose surface to the distal PTV surface. The algorithm was used to develop SFECT plans for six hypothetical PTVs and two head-and-neck patient PTVs. These plans were compared to single-energy plans developed by the same algorithm. Results: Conformity improved little beyond three energies, due to energy overlap and field-size restrictions. For the hypothetical PTVs, the non-PTV volume treated to 90% of the prescribed dose was reduced compared to the single-energy plans, resulting in improved dose conformity and supporting the hypothesis. The improved conformity came at the expense of increased dose heterogeneity within the PTV. One of the patient plans improved in conformity, supporting the hypothesis and indicating that the algorithm has the potential to plan patient cases. The other patient case did not improve in conformity and therefore did not support the hypothesis.
Conclusion: The planning algorithm was successful in developing plans that improved conformity while still treating the PTV to prescription dose. The planning algorithm has the potential to plan patient SFECT treatments. Future improvements to the algorithm may improve its ability to plan patient cases.
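The energy-selection step at the heart of such a segmentation scheme can be caricatured as follows. This toy uses the textbook rule of thumb that the therapeutic (90%) depth of an electron beam is roughly E/3.2 cm; the depths are made up, and this is not the thesis's iterative re-planning algorithm, only the initial-assignment idea.

```python
ENERGIES_MEV = [6, 9, 12, 16, 20]   # the Varian beam set from the abstract

def r90_cm(energy_mev):
    """Rule-of-thumb therapeutic (90% dose) depth for an electron beam."""
    return energy_mev / 3.2

def assign_energies(distal_depths_cm):
    """For each segment's distal PTV depth, pick the lowest energy whose
    R90 covers that depth (None if even 20 MeV cannot reach it)."""
    plan = []
    for depth in distal_depths_cm:
        covering = [e for e in ENERGIES_MEV if r90_cm(e) >= depth]
        plan.append(covering[0] if covering else None)
    return plan

# Three hypothetical segments with distal PTV depths of 1.5, 3.0, 4.9 cm:
example = assign_energies([1.5, 3.0, 4.9])
```

A real forward-planning loop would then recompute the composite dose, compare the 90% isodose surface to the distal PTV surface, and adjust segment shapes and weights over several iterations.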
67

Exploring the Quark-Gluon Content of Hadrons: From Mesons to Nuclear Matter

Matevosyan, Hrayr Hamlet 09 July 2007 (has links)
Even though Quantum Chromodynamics (QCD) was formulated over three decades ago, it poses enormous challenges for describing the properties of hadrons from the underlying quark-gluon degrees of freedom. Moreover, the problem of describing the nuclear force from its quark-gluon origin is still open. While a direct solution of QCD describing the hadrons and the nuclear force is not possible at this time, we explore a variety of approaches, ranging from phenomenology to first-principles calculations at one level of approximation or another, for linking the nuclear force to QCD. The Dyson-Schwinger equation (DSE) formulation of coupled integral equations for the QCD Green's functions allows a non-perturbative approach to describing hadronic properties, starting from the level of QCD n-point functions. A significant approximation in this method is the use of a finite truncation of the system of DSEs, which might distort the physical picture. In this work we explore the effects of including a more complete truncation of the quark-gluon vertex function on the resulting solutions for the quark 2-point functions as well as the pseudoscalar and vector meson masses. The exploration showed strong indications of possibly large contributions from the explicit inclusion of the gluon 3- and 4-point functions that are omitted in this and previous analyses. We then explore the possibility of extrapolating state-of-the-art lattice QCD calculations of nucleon form factors to the physical regime using phenomenological models of nucleon structure. Finally, we further develop the Quark Meson Coupling model for describing atomic nuclei and nuclear matter, in which the quark-gluon structure of nucleons is modeled by the MIT bag model and the nucleon many-body interaction is mediated by the exchange of scalar and vector mesons.
This approach allows us to formulate a fully relativistic theory, which can be expanded in the nonrelativistic limit to reproduce the well-known phenomenological Skyrme-type interaction density functional, thus providing a direct link to well-modeled nuclear forces. Moreover, it allows for a derivation of the equation of state of cold, uniform, dense nuclear matter for application to calculations of the properties of neutron stars.
68

Evaluation of MVCT Images with Skin Collimation for Electron Treatment Planning

Beardmore, Allen 11 July 2007 (has links)
Purpose: To evaluate the accuracy of electron beam dose calculations in MVCT images containing lead alloy masks. Method and Materials: A phantom consisting of two 30x30x5 cm<sup>3</sup> slabs of CIRS plastic water® was imaged using kVCT (GE Lightspeed-RT) and MVCT (TomoTherapy Hi·Art). Nine MVCT scans were taken with different square masks of lead alloy (Cerrobend®, density = 9.4 g·cm<sup>-3</sup>) on top of the phantom. The masks contained square apertures of 3x3 cm<sup>2</sup>, 6x6 cm<sup>2</sup> and 10x10 cm<sup>2</sup> and had thicknesses of 6 mm, 8 mm and 10 mm. The same collimation was simulated in the kVCT images by creating regions-of-interest (ROI) duplicating the sizes, shapes, and density of the masks. Using the Philips Pinnacle<sup>3</sup> treatment planning system, twelve treatment plans were created for the combinations of four electron energies (6, 9, 12, and 16 MeV) and the three apertures. For each plan, the mask thickness appropriate for the electron energy was used, and the dose distributions calculated using the kVCT and MVCT images were compared. In uniform dose regions dose differences were calculated; in high dose-gradient regions distances-to-agreement (DTA) were measured. Results: In the uniform dose region, the maximum differences between doses in the MVCT images and doses in the kVCT images were 5% or greater in magnitude for all but one aperture-energy combination. In the high dose-gradient region, more than half of the maximum DTA values exceeded 2 mm. Analysis of the MVCT images showed that the differences were largely due to two errors. First, the presence of the masks caused distortions in the MVCT numbers such that the calculated dose in the MVCT images penetrated less deeply. Second, distortion in the shape of the image of the collimation caused the calculation algorithm to scatter excess electrons into the central axis of the beam.
Conclusion: The presence of Cerrobend® masks in MVCT imaging produces distortions in the CT numbers that make electron beam dose calculations insufficiently accurate for electron beam treatment planning. Supported in part by a research agreement with TomoTherapy, Inc.
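The distance-to-agreement (DTA) metric used in the high-gradient regions above can be sketched in one dimension: for a dose value at a query point, find the distance to the nearest position where the reference profile takes the same value. A minimal illustrative implementation with a made-up linear profile, not a clinical tool:

```python
import numpy as np

def dta_1d(x_ref, d_ref, x0, d0):
    """1-D distance-to-agreement: distance from x0 to the nearest position
    where the linearly interpolated reference profile equals dose d0.
    Returns inf if the profile never reaches d0."""
    xs = np.asarray(x_ref, dtype=float)
    g = np.asarray(d_ref, dtype=float) - d0
    hits = list(xs[g == 0.0])                 # exact grid matches
    for i in range(len(g) - 1):               # interpolate each sign change
        if g[i] * g[i + 1] < 0.0:
            t = g[i] / (g[i] - g[i + 1])
            hits.append(xs[i] + t * (xs[i + 1] - xs[i]))
    return min((abs(h - x0) for h in hits), default=float("inf"))

# Hypothetical reference profile: dose falls 10%/cm from 100% at x = 0 cm.
x = np.arange(0.0, 11.0)
d = 100.0 - 10.0 * x
```

DTA complements a plain dose difference: in a steep gradient a large dose discrepancy may correspond to a sub-millimetre spatial shift, which is why the study reports differences in uniform regions but DTA in gradient regions.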
69

Feasibility Testing for SalSA Neutrino Astrophysics Project

Marsh, Jarrod Christopher 15 January 2008 (has links)
Research is presented that was performed to determine possible locations for a full-scale Salt Dome Shower Array (SalSA) neutrino telescope sensitive to neutrino energies from TeV ($10^{12}$ eV) through PeV ($10^{15}$ eV) to EeV ($10^{18}$ eV). A detector to test possible site locations was designed around a half-watt ham radio transmitting short pulses of 145.500 MHz radiation. The research began with the design of the system; several phases of design took place as the system was refined and calibrated. Once finished, the system was taken into the field and tested, the objective being to determine how the signals depended on distance. In principle, there should have been a linear relationship between distance and voltage. After successful testing, the system will be taken to a salt mine to accurately determine the index of refraction and attenuation length of that particular mine's salt.
70

Using Power Spectra To Look For Anisotropies in Ultra-High Energy Cosmic Ray Distributions

McEwen, Megan Alicia 05 September 2007 (has links)
The origins and compositions of ultra-high energy cosmic rays (UHECRs) remain a mystery to this day. The Pierre Auger Observatory (PAO) is being constructed in the hope that it will help solve this mystery by detecting more UHECRs than any previous experiment. In this dissertation, I discuss this experiment and analyze the data collected so far by comparing it with simulated data from possible source distributions. In these simulations, I track antiprotons, along with other possible cosmic-ray primaries, through various models of galactic and extragalactic magnetic fields. Once they reach a certain distance, I record their positions on the sky; these final positions determine the weight of each position on the sky. This weight is then applied to possible source distributions, the particles are reinjected back toward the earth's surface, and the simulated arrival directions are analyzed. I analyze the data by calculating spherical harmonic coefficients. Using these angular power spectra is an attempt to provide a common language for model builders and experimentalists: anisotropies of any size are easily detected using these coefficients, making them an ideal way to look at observed events that might not be coming from single point sources. I compare the results of this analysis with data obtained by the PAO, likewise characterized by its spherical harmonic coefficients. After comparing the events collected to date by the PAO with three possible source distributions (isotropic, active galactic nuclei, and nearby galaxies), I have observed that the data look consistent with either nearby galaxies or AGNs as sources. However, there exists an extra dipole moment inherent to a half-sky exposure, such as the PAO currently has, which adds an uncertainty that fundamentally undermines the capabilities of large-scale anisotropy analysis.
In the absence of clear point-like sources, construction of a detector in the Northern hemisphere will be necessary in order to determine the origins of UHECRs with any confidence.
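The angular power spectrum estimator described above can be sketched directly: for a set of arrival directions, estimate each a_lm as the sample mean of conj(Y_lm) and form C_l. The normalization below is one common convention (not necessarily the dissertation's), and the event samples are synthetic.

```python
import numpy as np
from math import factorial, pi
from scipy.special import lpmv

def ylm(l, m, theta, phi):
    """Spherical harmonic Y_lm for m >= 0; theta = colatitude, phi = azimuth."""
    norm = np.sqrt((2 * l + 1) / (4 * pi) * factorial(l - m) / factorial(l + m))
    return norm * lpmv(m, l, np.cos(theta)) * np.exp(1j * m * phi)

def power_spectrum(theta, phi, lmax):
    """C_l with a_lm estimated as the sample mean of conj(Y_lm) over events;
    m < 0 terms are folded in via |a_{l,-m}| = |a_{l,m}|."""
    C = []
    for l in range(lmax + 1):
        total = abs(np.mean(np.conj(ylm(l, 0, theta, phi)))) ** 2
        for m in range(1, l + 1):
            total += 2.0 * abs(np.mean(np.conj(ylm(l, m, theta, phi)))) ** 2
        C.append(total / (2 * l + 1))
    return C

# A perfectly clustered "point source" versus an isotropic synthetic sky:
rng = np.random.default_rng(1)
th_iso = np.arccos(rng.uniform(-1.0, 1.0, 2000))
ph_iso = rng.uniform(0.0, 2.0 * pi, 2000)
th_pt = np.full(100, 1.0)
ph_pt = np.full(100, 2.0)
C_iso = power_spectrum(th_iso, ph_iso, 2)
C_pt = power_spectrum(th_pt, ph_pt, 2)
```

With this convention C_0 is fixed at 1/(4*pi) for any sample, while the clustered sample shows a far larger dipole (C_1) than the isotropic one; a genuine analysis would also have to deproject the partial-sky exposure, which is exactly the half-sky dipole ambiguity raised in the abstract.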
