271

The use of the Monte Carlo technique in the simulation of small-scale dosimeters and microdosimeters

Baker, Adam Richard Ernest January 2011 (has links)
In order to understand the effects of low-keV radiation at small scales, a number of detector designs have been developed to investigate the ways in which energy is deposited. This research investigated several of these designs, looking in particular at their properties as small-scale dosimeters exposed to photon radiation in the 5-50 keV range. In addition, Monte Carlo models of the different detector designs were constructed to establish the trends in energy absorption within the detectors. An important part of the research was the dose enhancement produced when the low-Z elements present in human tissue are in proximity to higher-Z metallic elements within this energy range. This included dose enhancement due to the photoelectric effect, for photon energies of 5-50 keV, and through the absorption of thermal neutrons. The reason for studying the dose enhancement was twofold: to examine the increase in energy absorption for elements currently being investigated for medical applications, and for elements present in dosimeters alongside the tissue-equivalent elements. Simulations of a variety of detector designs, both solid-state and gas-filled, were produced with the Monte Carlo codes MCNP4C and EGSnrc and the results of the two codes compared. These models were then compared with experimental results and were found to predict the trends in behaviour of some of the detector designs.
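A minimal illustration of the dose-enhancement idea discussed above, not taken from the thesis: a toy one-dimensional Monte Carlo in which photons traverse tissue voxels with and without a thin high-Z foil, using purely hypothetical attenuation coefficients and a crude electron-escape fraction in place of the full MCNP4C/EGSnrc physics.

    import numpy as np

    rng = np.random.default_rng(42)

    def absorbed_per_layer(n_photons, layers):
        """Toy 1-D photon transport.  `layers` is a list of (thickness in cm, mu in cm^-1);
        each photon is absorbed in the first layer whose sampled interaction depth
        falls inside it, otherwise it escapes.  Returns absorption counts per layer."""
        counts = np.zeros(len(layers))
        for _ in range(n_photons):
            for i, (thickness, mu) in enumerate(layers):
                if rng.exponential(1.0 / mu) < thickness:   # interaction in this layer
                    counts[i] += 1
                    break
        return counts

    # Hypothetical linear attenuation coefficients near 30 keV (illustrative values only).
    mu_tissue, mu_metal = 0.4, 50.0      # cm^-1: the high-Z foil absorbs far more strongly
    t_tissue, t_metal = 0.05, 0.001      # 0.5 mm tissue voxels, 10 micron metal foil

    n = 200_000
    with_foil    = absorbed_per_layer(n, [(t_tissue, mu_tissue), (t_metal, mu_metal), (t_tissue, mu_tissue)])
    without_foil = absorbed_per_layer(n, [(t_tissue, mu_tissue), (t_tissue, mu_tissue)])

    # Crude dose-enhancement estimate for the tissue voxel next to the foil: assume a
    # fixed fraction of the energy absorbed in the thin foil is carried back into the
    # neighbouring tissue by short-range photoelectrons (assumed value, illustrative).
    electron_escape_fraction = 0.5
    enhanced = with_foil[2] + electron_escape_fraction * with_foil[1]
    baseline = without_foil[1]
    print(f"toy dose-enhancement ratio: {enhanced / baseline:.2f}")

With these made-up numbers the ratio comes out above one simply because the foil absorbs strongly and half of that energy is assumed to leak back into the adjacent tissue; the thesis relies on full Monte Carlo transport to model the effect properly.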
272

Asteroseismology of red giant stars : a tool for constraining stellar models

Bossini, Diego January 2016 (has links)
The aim of this thesis is to study the stellar evolution and asteroseismology of red-giant stars mainly from a modelling point of view, in particular the impact of adopting different mixing schemes on stars with convective core burning. Thanks to NASA's Kepler space telescope, asteroseismology of thousands of giants has provided new information on their internal structure that can be used to place constraints on their cores. I used several stellar evolution codes (MESA, BaSTI, and PARSEC) to investigate the effect of different mixing schemes in helium-core-burning stars. Comparing them with observed stars, I concluded that the standard stellar models widely used in the literature cannot describe the combined observed distribution of luminosity and period spacing. I then proposed as a solution a penetrative-convection model with a moderate overshooting parameter. Additional tests on Kepler's open clusters (NGC 6791 and NGC 6819) and on secondary-clump stars allowed me to revise my mixing model.
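For context, the period spacing referred to here is normally interpreted through the asymptotic expression for gravity modes, which is not quoted in the abstract but is the standard link between the observed spacing of dipole mixed modes and the buoyancy profile around the core:

    \Delta\Pi_\ell = \frac{2\pi^2}{\sqrt{\ell(\ell+1)}} \left( \int_{\mathrm{g\text{-}cavity}} N \,\frac{dr}{r} \right)^{-1},
    \qquad
    \Delta\Pi_1 = \sqrt{2}\,\pi^2 \left( \int N \,\frac{dr}{r} \right)^{-1},

where N is the Brunt-Väisälä frequency. Mixing schemes such as penetrative convection change the extent of the convective core and therefore the value of the integral, which is why the observed period spacing is such a sensitive discriminant between the mixing prescriptions tested here.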
273

Investigating the low-frequency stability of BiSON's resonant scattering spectrometers

Davies, Guy R. January 2011 (has links)
The main focus of the thesis is the study of low-degree, low-frequency solar p modes through the analysis of high-resolution power spectra generated from 20 years of high-quality data collected by the Birmingham Solar Oscillations Network (BiSON) Resonant Scattering Spectrometers (RSS). To that end we present a novel model of the RSS and its observations that enables a significant improvement in the calibration of ground-based Sun-as-a-star Doppler velocity observations. We show that the previously neglected multiple scattering in the RSS vapour cell is significant and demonstrate its impact on the spatial weighting over the solar disk, combining the new instrumental weighting with a detailed treatment of terrestrial atmospheric effects and a model of the solar surface velocity field. The resulting simulation allows the development of a new and successful correction for differential atmospheric extinction, giving up to a 25% increase in the signal-to-noise ratio at low frequencies (0.8 to 1.3 mHz). The improvement in signal to noise allows the detection of low-frequency p modes with small associated errors in frequency and, together with the fitting of mode structure, yields estimates of mode linewidth and power. Over the frequency range 972 to 1850 μHz we find the exponent of the frequency-linewidth dependence to be 7.5 ± 0.4.
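The quoted exponent describes a power-law fit of mode linewidth against frequency; in a standard parameterisation (my notation, not the thesis's) it reads

    \Gamma(\nu) = \Gamma_0 \left( \frac{\nu}{\nu_0} \right)^{\gamma},
    \qquad \gamma = 7.5 \pm 0.4 \quad (972\ \mu\mathrm{Hz} \lesssim \nu \lesssim 1850\ \mu\mathrm{Hz}),

i.e. log Γ is linear in log ν with slope γ. Such a steep exponent means the lowest-frequency modes are extremely narrow and long-lived, which is why the signal-to-noise improvements described above matter most in that range.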
274

Asteroseismology of cool stars : testing scaling laws and detecting signatures of rapid structure variation

Rodrigues Coelho, Hugo January 2017 (has links)
First, we investigated the νmax scaling relation, a widely used equation stating that the frequency of maximum amplitude in a star's power spectrum scales with a combination of its surface gravity and effective temperature. We tested how well the oscillations of cool main-sequence and sub-giant stars follow this relation, using an ensemble of asteroseismic targets observed by Kepler. We then tested the seismic scaling relations in a small group of 10 bright red-giant stars observed by Kepler. These giants, some of the brightest observed in the Kepler field, have precisely measured parallaxes. We compared the measured distances with inferences made using asteroseismic parameters. We also combined high-quality spectroscopic data with seismic constraints to determine their evolutionary phase, and compared the observed surface abundances of lithium and carbon with models that account for additional mixing processes in red giants. Finally, we analyzed a group of 13 stars observed by Kepler and used asteroseismic tools to extract model-independent information about their internal regions. Our objective was to detect the so-called acoustic glitches, characterized as departures from the uniform frequency spacings predicted by the asymptotic relation. Such departures originate in regions where there is an abrupt change in the stratification of the star.
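The relation under test is usually written relative to solar values, together with the companion relation for the large frequency separation (neither is quoted explicitly in the abstract):

    \frac{\nu_{\max}}{\nu_{\max,\odot}} \simeq \frac{g}{g_\odot} \left( \frac{T_{\mathrm{eff}}}{T_{\mathrm{eff},\odot}} \right)^{-1/2},
    \qquad
    \frac{\Delta\nu}{\Delta\nu_\odot} \simeq \left( \frac{M}{M_\odot} \right)^{1/2} \left( \frac{R}{R_\odot} \right)^{-3/2},

with commonly adopted solar reference values of roughly ν_max,⊙ ≈ 3090 μHz and Δν_⊙ ≈ 135 μHz. Inverting the pair gives asteroseismic masses and radii, and hence the luminosities and distances that are compared against the measured parallaxes in the second part of the work.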
275

Gravitational-wave astronomy with coalescing compact binaries : detection and parameter estimation with advanced detectors

Smith, Rory James Edwin January 2013 (has links)
The current generation of interferometric gravitational-wave detectors, LIGO and Virgo, are being upgraded to their so-called advanced phase. These instruments, together with new instruments in Japan and India, KAGRA and LIGO India, will form a network of advanced gravitational-wave detectors with which detections are expected to become routine. Amongst the prime sources for gravitational-wave astronomy are coalescing compact binaries consisting of neutron stars and/or black holes. Filtering detector data to detect these sources relies on precise templates of the expected gravitational-wave signals. In addition, estimating the parameters encoded in the signals (masses, spins, etc.) requires sophisticated Bayesian inference techniques. Templates are typically computationally expensive to generate and can be a bottleneck in data analysis. Here we focus on two aspects of gravitational-wave astronomy with coalescing compact binaries. The first part of this thesis studies the template-waveform requirements for detecting intermediate-mass black holes in Advanced LIGO through the coalescence of a stellar-mass companion into an intermediate-mass black hole. The second part focuses on numerical and analytic techniques to improve the efficiency of (Bayesian) parameter estimation on coalescing binaries when parameter estimation is dominated by template waveform generation. Such efficiency improvements are crucial for gravitational-wave astronomy with advanced detectors.
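Both the matched filtering and the Bayesian parameter estimation mentioned here rest on the same noise-weighted inner product; in standard notation (not quoted in the abstract),

    \langle a \mid b \rangle = 4\,\mathrm{Re} \int_0^\infty \frac{\tilde{a}(f)\,\tilde{b}^*(f)}{S_n(f)}\, df,
    \qquad
    \mathcal{L}(d \mid \theta) \propto \exp\!\left[ -\tfrac{1}{2} \langle d - h(\theta) \mid d - h(\theta) \rangle \right],

where S_n(f) is the detector noise power spectral density, d the data and h(θ) the template waveform. Every likelihood evaluation in the sampler requires a fresh template h(θ), which is why waveform generation dominates the cost of parameter estimation and why the efficiency improvements of the second part are needed.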
276

Extragalactic X-ray binaries : black holes and neutron stars in Centaurus A

Burke, Mark J. January 2013 (has links)
This thesis presents research into the X-ray binary population of NGC 5128 (Centaurus A). The two principal investigations focus on the identification of black-hole candidates, which can be identified by their long-term variability and spectral properties. We demonstrate this with what we believe is our best example: a source that faded over two months of observations and displayed cool-disc, thermal-dominant spectra at high luminosities, similar to the Galactic black-hole X-ray binaries. The main result of this research is that the population of black-hole X-ray binaries is more pronounced in the dust lane of the galaxy than in the halo. The explanation of this result, based on the mass of the donor stars required for systems to emit at the observed luminosities, may also explain the long-noted steepening of the X-ray luminosity function of early-type galaxies at a few × 10^38 erg/s, an effect that increases with the age of the stellar population. Finally, frequent Chandra observations of NGC 5128 were used to investigate the two known ultraluminous X-ray sources. These are transient systems and were observed at luminosities of (1-10)% of their peak, in the regime frequented by the Galactic X-ray binaries. This presented an exciting opportunity to study the lower-luminosity behaviour of these systems in an effort to determine the mass of the accreting compact object. The results of the spectral analysis point towards accretion powered by a stellar-mass, rather than an intermediate-mass, black hole. The long-term variability of these sources is reminiscent of several of the long-period Galactic X-ray binaries.
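For orientation (this is an addition for context, not a claim from the thesis), the luminosity at which the X-ray luminosity function steepens is close to the Eddington limit of a neutron-star accretor,

    L_{\mathrm{Edd}} = \frac{4\pi G M m_p c}{\sigma_T} \approx 1.26 \times 10^{38} \left( \frac{M}{M_\odot} \right)\ \mathrm{erg\,s^{-1}},

so a 1.4 M_⊙ neutron star saturates near 2 × 10^38 erg/s, while a ~10 M_⊙ black hole can reach ~10^39 erg/s. Sources persistently above a few × 10^38 erg/s are therefore natural black-hole candidates, which is why the behaviour of the luminosity function around this value is tied to the black-hole population discussed above.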
277

Galaxy groups from observations and simulations

Dariush, Aliakbar January 2009 (has links)
The cold dark matter model has become the leading theoretical paradigm for the formation of structure in the Universe. Together with the theory of cosmic inflation, this model makes a clear prediction for the initial conditions of structure formation and predicts that structures grow hierarchically through gravitational instability. As a result, small structures collapse first and eventually build large structures such as groups and clusters of galaxies. While clusters are among the most massive bound structures in the Universe, groups are more numerous, and most galaxies reside within galaxy groups. Testing this model requires that the precise measurements delivered by galaxy surveys can be compared to robust and equally precise theoretical models. The current project consists of two parts. In the first part, we investigate the existence and evolution of early-formed fossil galaxy groups and the development of the luminosity gap between their brightest galaxies. We study the correlation of these properties with the group mass-assembly history by comparing observations to the Millennium dark matter simulation and its associated semi-analytic galaxy catalogues, together with the Millennium gas simulation. Fossil galaxy groups are believed to be the end result of galaxies merging within a normal galaxy group, leaving behind the X-ray halo characteristic of a group. The sample of fossils in our study is selected according to the conventional definition of fossil groups. The luminosity-gap statistics in the Millennium Run are compared to the theoretical models. The study of the mass evolution of fossils shows that, in comparison to normal groups, fossils are more evolved systems and have assembled their masses at higher redshifts, while normal groups are still evolving. Our work suggests the earlier formation and higher mass concentration of fossil systems. The space densities estimated from the Millennium Run are smaller than those obtained from the observations, although in some cases they agree within the observational errors. Furthermore, we study the development of the magnitude gap from a general point of view and its correlation with the mass assembly of groups and clusters of galaxies using the same dark matter simulations. The results show that the current definition of fossils, based on the magnitude gap Δm₁₂ ≥ 2, does not guarantee that a group or cluster is an early-formed system. Moreover, the fossil phase (the period during which the magnitude gap of a galaxy group remains above the threshold, i.e. Δm₁₂ ≥ 2) is a temporary phase in the life of groups, and most groups experience such a phase in their lifetime. We revise the current optical definition of fossil groups by studying the evolution and history of various physical parameters associated with the mass assembly of galaxy groups and clusters. In the second part of this dissertation, we study the optical properties of a sample of 25 optically selected groups from the XMM-IMACS (XI) project. The project aims to improve our knowledge of how the dynamics and properties of group galaxies describe the global characteristics of groups, using a combination of radio, X-ray, infrared, and optical observations together with imaging and spectroscopy of the group galaxy population. The observations were performed during three observing runs at the Las Campanas Observatory. Image processing and precise astrometry were carried out in preparation for the spectroscopic follow-up observations.
Group virial radii were found by combining the spectroscopic results with those from the Millennium simulation. Finally, we determined the group luminosity functions within the overdensity radii, using the colour-magnitude relation extracted from the spectroscopic observations, and find that the luminosity function of optically selected groups is very similar to that of X-ray-selected groups.
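As an illustration of the magnitude-gap criterion discussed above, here is a minimal sketch (my own construction, with hypothetical class names and made-up data) that flags a group as a fossil candidate when the gap between its two brightest members inside half the virial radius reaches 2 magnitudes; the full definition also involves an X-ray luminosity threshold, which is omitted here.

    from dataclasses import dataclass

    @dataclass
    class Galaxy:
        r_mag: float        # R-band magnitude (fainter galaxies have larger values)
        dist_r200: float    # projected distance from the group centre, in units of r_200

    def magnitude_gap(members, max_radius=0.5):
        """Delta m_12: gap between the brightest and second-brightest galaxies
        within max_radius * r_200 (conventionally half the virial radius)."""
        inner = sorted(g.r_mag for g in members if g.dist_r200 <= max_radius)
        if len(inner) < 2:
            raise ValueError("need at least two members inside the search radius")
        return inner[1] - inner[0]

    def is_fossil_candidate(members, gap_threshold=2.0):
        """Optical part of the fossil-group criterion (the X-ray condition is not checked here)."""
        return magnitude_gap(members) >= gap_threshold

    # Made-up example group: one dominant elliptical and three fainter members.
    group = [Galaxy(12.1, 0.05), Galaxy(14.6, 0.30), Galaxy(15.2, 0.45), Galaxy(13.0, 0.80)]
    print(magnitude_gap(group), is_fossil_candidate(group))   # gap = 2.5 -> True

The point made above is that passing this snapshot test at a single epoch does not guarantee an early-formed system: the gap can open and later close again as galaxies merge and new bright members fall into the group.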
278

Photospheric albedo and the measurement of energy and angular electron distributions in solar flares

Dickson, Ewan Cameron Mackenzie January 2013 (has links)
In this thesis I examine the role of Compton back-scatter of solar-flare hard X-rays, also known as albedo, in the inference of the parent electron spectrum. I consider how albedo affects measurements of the energy and angular distributions when the mean electron flux spectrum in a solar flare is inferred using regularised inversion techniques. The angular distribution of the accelerated electron spectrum is a key parameter in understanding the acceleration and propagation mechanisms at work in solar flares. However, the anisotropy of energetic electrons is still a poorly known quantity, with observational studies producing evidence for an isotropic distribution while theoretical models mainly consider the strongly beamed case. First we investigate the effect of albedo on the observed spectrum for a variety of commonly considered analytic forms of the pitch-angle distribution. As albedo is the result of the scattering of X-ray photons emitted downwards towards the photosphere, different angular distributions are likely to exhibit varying amounts of albedo reflection; in particular, downward-directed beams of electrons are likely to produce spectra that are strongly influenced by albedo. The low-energy cut-off of the non-thermal electron spectrum is another important parameter, as its value has strong implications for the total energy contained in the flare; however, both albedo and a low-energy cut-off cause a flattening of the observed X-ray spectrum at low energies. The Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI) X-ray database has been searched to find solar flares with weak thermal components and flat photon spectra in the 15-20 keV energy range. Using the method of Tikhonov regularisation, we determine the mean electron flux distribution from the count spectra of a selection of these events. We have found 18 cases which exhibit a statistically significant local minimum (a dip) in the range 10-20 keV. The positions and spectral indices of events with a low-energy cut-off indicate that such features are likely to be the result of photospheric albedo. It is shown that when the isotropic albedo correction is applied, all low-energy cut-offs in the mean electron spectrum are removed. The effect of photospheric albedo on the observed X-ray spectrum suggests that RHESSI observations can be used to infer the anisotropy of the angular distribution of X-ray-emitting electrons. A bi-directional approximation is applied and regularised inversion is performed for eight large flare events viewed by RHESSI, to deduce the electron spectra in both the downward (towards the photosphere) and upward (away from the photosphere) directions. The electron spectra and the electron anisotropy ratios are calculated over a broad energy range, from about 10 keV up to ~300 keV, near the peak of the flares. The variation of electron anisotropy over short time intervals lasting 4, 8 and 16 seconds near the impulsive peak has been examined. The results show little evidence for strong anisotropy, and the mean electron flux spectra are consistent with an isotropic electron distribution. The inferred X-ray-emitting electron spectrum is likely to have been modified from the accelerated or injected distribution by transport effects, so models of electron transport are necessary to connect the observations to the injected distribution. We use the method of stochastic simulations to investigate the effect of Coulomb collisions on an electron beam propagating through a coronal loop.
These simulations suggest that the effect of Coulomb collisions on a uniformly downward-directed beam, as envisaged in the collisional thick-target model, is not strong enough to scatter the pitch-angle distribution sufficiently to be consistent with the measurements made in the previous chapter. Furthermore, these simulations suggest that for the conditions studied the constraints inferred in Chapter 4 are only consistent with a low level of anisotropy in the injected electron distribution.
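Tikhonov regularisation, as named above, is in its simplest zeroth-order form the penalised least-squares problem sketched below; this toy version (my own, with a made-up smoothing kernel in place of the real bremsstrahlung cross-section matrix) shows why regularisation is needed when inverting a count spectrum.

    import numpy as np

    def tikhonov_solve(A, b, lam):
        """Zeroth-order Tikhonov regularisation: minimise ||A x - b||^2 + lam^2 ||x||^2,
        solved here through the regularised normal equations."""
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

    rng = np.random.default_rng(0)
    e = np.linspace(10.0, 100.0, 80)                      # electron energies (keV)
    # "True" electron spectrum: steep power law with a shallow dip near 20 keV.
    x_true = e**-3 * (1.0 - 0.5 * np.exp(-((e - 20.0) / 4.0)**2))
    # Toy forward operator: a broad smoothing kernel standing in for the real
    # cross-section matrix; it is badly conditioned, like the real inverse problem.
    A = np.exp(-0.5 * ((e[:, None] - e[None, :]) / 2.0)**2)
    b = A @ x_true + 1e-6 * rng.standard_normal(e.size)   # noisy "count spectrum"

    x_naive = np.linalg.solve(A, b)            # direct inversion: noise is hugely amplified
    x_reg   = tikhonov_solve(A, b, lam=1e-2)   # regularised inversion: stable and smooth

    rel = lambda x: np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
    print(f"relative error, direct inversion:    {rel(x_naive):.1e}")
    print(f"relative error, Tikhonov (lam=1e-2): {rel(x_reg):.1e}")

The regularisation parameter λ trades fidelity to the counts against smoothness of the recovered spectrum; it is in electron spectra recovered this way that the statistically significant low-energy dips described above are sought.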
279

Searches for continuous and transient gravitational waves from known neutron stars and their astrophysical implications

Pitkin, Matthew David January 2006 (has links)
We have used data from the third and fourth science runs of the laser-interferometric gravitational-wave detectors LIGO and GEO600 to produce upper limits on the emission of gravitational waves from a selection of known neutron stars. Two different emission mechanisms are considered: i) the emission of continuous gravitational waves from triaxial neutron stars; and ii) the emission of quasi-normal-mode ring-downs from glitching neutron stars. We have produced upper limits on the gravitational-wave amplitude and ellipticity for 93 known pulsars, assuming continuous emission via triaxiality. This selection includes the majority of currently known pulsars with frequencies > 25 Hz, many of them within binary systems and globular clusters. New algorithms to take into account the motions within binary systems and the possible effects of pulsar timing noise are presented. Also shown is the first analysis to combine the data sets from two distinct science runs as a way of lowering the upper limits. The results are starting to push into the range of plausible neutron-star ellipticities, with the Crab pulsar closely approaching the limit that can be set through spin-down arguments. For the 32 of these pulsars in globular clusters, the results provide upper limits independent of the cluster dynamics. The astrophysical significance of these results is discussed. Along with results for real pulsars we also present the extraction of simulated signals injected into the interferometers during the science runs; these provide validation checks of both the extraction software and the coherence of the detectors. Two techniques are discussed for searching for quasi-normal-mode ring-down signals from excited neutron stars, for example during a glitch: one based on matched filtering and the other on Bayesian evidence. Both are applied to a search for such a signal from SGR 1806-20 during a GRB on 27 December 2004, using LIGO H1 and GEO600 data. This search provided upper limits on the energy released in gravitational waves via quasi-normal modes over the frequency range 1-4 kHz. These are compared with results from a previous search using the bar detector AURIGA (Baggio et al., 2005) and with theoretical arguments. The limitations of the search and of the search techniques, and possible extensions to them, are discussed. The future of these searches is discussed with regard to extensions of the analysis techniques and the number of potential sources, with particular emphasis on searches using data from the current LSC S5 science run.
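For the continuous-wave part, the amplitude and ellipticity limits are related through the standard expression for gravitational-wave emission from a triaxial neutron star rotating about a principal axis (not quoted in the abstract):

    h_0 = \frac{16\pi^2 G}{c^4}\,\frac{I_{zz}\,\varepsilon\, f_{\mathrm{rot}}^2}{d},
    \qquad
    \varepsilon = \frac{I_{xx} - I_{yy}}{I_{zz}},
    \qquad
    f_{\mathrm{gw}} = 2 f_{\mathrm{rot}},

so an observational upper limit on h₀, combined with the pulsar's distance d and a fiducial moment of inertia I_zz ≈ 10^38 kg m², translates directly into the quoted ellipticity limits. The spin-down limit mentioned for the Crab follows from equating the implied gravitational-wave luminosity to the observed rotational energy-loss rate.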
280

A principal component approach to space-based gravitational wave astronomy

Leighton, Michele Dawn January 2016 (has links)
The current approach to data analysis for the Laser Interferometer Space Antenna (LISA) depends on the time-delay interferometry (TDI) observables, which have to be generated before any weak-signal detection can be performed. These are linear combinations of the raw data with appropriate time shifts that lead to the cancellation of the laser frequency noises. This is possible because of the multiple occurrences of the same noises in the different raw data streams. Originally, these observables were derived by hand, starting with LISA as a simple stationary array and then adjusting for the antenna's motions. However, none of the observables survived the flexing of the arms, in that they did not lead to cancellation with the same structure. The principal component approach, presented by Romano and Woan, is another way of handling these noises that simplifies the data analysis by removing the need to create the observables before the analysis. This method also depends on the multiple occurrences of the same noises but, instead of using them for cancellation, it takes advantage of the correlations that they produce between the different readings. These correlations can be expressed in a noise (data) covariance matrix, which appears in the Bayesian likelihood function when the noises are assumed to be Gaussian. Romano and Woan showed that performing an eigendecomposition of this matrix produces two distinct sets of eigenvalues that can be distinguished by the absence of laser frequency noise from one set. Transforming the raw data using the corresponding eigenvectors also produces data that are free from the laser frequency noises. This result led to the idea that the principal components may actually be time-delay interferometry observables, since they produce the same outcome, namely data that are free from laser frequency noise. The aims here were (i) to investigate the connection between the principal components and these observables, (ii) to prove that data analysis using them is equivalent to that using the traditional observables, and (iii) to determine how this method adapts to the real LISA, especially the flexing of the antenna. To test the connection between the principal components and the TDI observables, a 10 × 10 covariance matrix containing integer values was used so that an algebraic solution for the eigendecomposition could be obtained. The matrix was generated using fixed, unequal arm lengths and stationary noises with equal variances for each noise type. The results confirm that all four Sagnac observables can be generated from the eigenvectors of the principal components. The observables obtained from this method, however, are tied to the length of the data and are not general expressions like the traditional observables; for example, the Sagnac observables for two different time stamps were generated from different sets of eigenvectors. It was also possible to generate the frequency-domain optimal AET observables from the principal components obtained from the power spectral density matrix. These results indicate that this method is another way of producing the observables, so analysis using principal components should give the same results as analysis using the traditional observables. This was confirmed by the fact that the same relative likelihoods (within 0.3%) were obtained from Bayesian estimates of the signal amplitude of a simple sinusoidal gravitational wave using the principal components and the optimal AET observables.
This method fails if the eigenvalues that are free from laser frequency noises are not generated. These are obtained from the covariance matrix, and the properties of LISA required for its computation are the phase-locking configuration, the arm lengths and the noise variances. Preliminary results on the effects of these properties on the principal components indicate that only the absence of phase-locking prevented their production. The flexing of the antenna results in time-varying arm lengths, which will appear in the covariance matrix and, from our toy-model investigations, this did not prevent the occurrence of the principal components. The difficulty with flexing, and also with non-stationary noises, is that the Toeplitz structure of the matrix is destroyed, which affects any computational method that takes advantage of this structure. In terms of separating the two sets of data for the analysis, this was not necessary because the laser frequency noises are very large compared to the photodetector noises, which resulted in a significant reduction of the data containing them after the matrix inversion. In the frequency domain the power spectral density matrices were block diagonal, which simplified the computation of the eigenvalues by allowing it to be done separately for each block. The results in general showed a lack of principal components in the absence of phase-locking, except for the zero bin. The major difference with the power spectral density matrix is that the time-varying arm lengths and non-stationarity do not show up, because of the summation in the Fourier transform.
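A toy version of the eigendecomposition argument (my own construction, not the thesis's matrices): several data streams share one very large common "laser" noise plus small independent "photodetector" noises, and the eigenvectors of the sample covariance split into a noisy direction and a laser-noise-free subspace, the analogue of the TDI-like combinations described above.

    import numpy as np

    rng = np.random.default_rng(7)
    n_samples, n_channels = 20_000, 4

    # One huge common noise (standing in for laser frequency noise) seen by every
    # channel, plus small independent noises (standing in for photodetector noise).
    laser = 1e3 * rng.standard_normal(n_samples)
    small = rng.standard_normal((n_samples, n_channels))
    data = laser[:, None] + small            # each column is one raw data stream

    # Sample covariance across channels and its eigendecomposition.
    cov = np.cov(data, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    print("eigenvalues:", np.round(eigvals, 2))
    # One eigenvalue is ~n_channels * 1e6 (the common-noise direction); the rest are ~1.

    # Projecting the data onto the small-eigenvalue eigenvectors suppresses the common
    # noise, just as the laser-noise-free principal components do in the LISA analysis.
    clean = data @ eigvecs[:, :-1]           # drop the largest-eigenvalue direction
    print("max |correlation| of cleaned channels with the laser noise:",
          np.max(np.abs(np.corrcoef(clean.T, laser)[-1, :-1])))

In the real problem the covariance is taken over time samples as well as channels and depends on the arm lengths and phase-locking configuration, which is what the toy-model investigations described above are probing.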
