221.
A principal component analysis of gravitational-wave signals from extreme-mass-ratio sources
Bloomer, Edward Joseph, January 2010
The Laser Interferometer Space Antenna (LISA) will detect the gravitational wave emissions from a vast number of astrophysical sources, but extracting useful information about individual sources or source types is an extremely challenging prospect: the large number of parameters governing the behaviour of some sources makes exhaustively searching this parameter space computationally expensive. We investigate the potential of an alternative approach, focused on detecting the presence of particular inspiralling binary source signals within a time series of gravitational wave data and quickly providing estimates of their coalescence times. Specifically, we use Principal Component Analysis (PCA) to identify redundancy within the parameter space of Extreme Mass Ratio Inspiral (EMRI) sources and construct a new, smaller parameter space containing only the relevant signal information. We then create a simple search method based on how gravitational wave signals project into this new parameter space. Test cases indicate that a small number of principal components span a space occupied by the majority of EMRI spectrograms, but that non-EMRI signals (including noise) do not inhabit this space. A PCA-based search method is capable of indicating the presence of gravitational waves from EMRI sources within a new test spectrogram. The results of our PCA-based searches show that the method could be used to provide quick initial estimates of EMRI coalescence times, to seed a more thorough search.
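The projection step at the heart of the method can be illustrated with a minimal, hypothetical Python sketch (random matrices stand in for real EMRI template spectrograms, and the component count is an arbitrary illustrative choice): template spectrograms are stacked, the leading principal components are extracted via an SVD, and a test spectrogram is flagged by how small its reconstruction residual is in the reduced space.

```python
import numpy as np

# Toy stand-ins: rows of `training` are flattened EMRI template spectrograms;
# `test` is one flattened spectrogram to classify. All values are synthetic.
rng = np.random.default_rng(0)
training = rng.standard_normal((200, 1024))
test = training[0] + 0.1 * rng.standard_normal(1024)

# Centre the training set and extract the leading principal components.
mean = training.mean(axis=0)
_, _, vt = np.linalg.svd(training - mean, full_matrices=False)
k = 10                               # small number of retained components
components = vt[:k]                  # shape (k, 1024)

# Project the test spectrogram into the reduced space and reconstruct it.
coeffs = components @ (test - mean)
reconstruction = mean + coeffs @ components

# A small relative residual means the test spectrogram lies close to the
# subspace spanned by the EMRI templates; noise or non-EMRI signals do not.
residual = np.linalg.norm(test - reconstruction) / np.linalg.norm(test)
print(f"relative residual: {residual:.3f}")
```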
222.
Spin observables in kaon photoproduction from the neutron in a deuterium target with CLAS
Hassall, Neil, January 2010
This work presents the first ever measurements of several polarization observables for the reactions $\gamma n \to K^{0}\Lambda$ and $\gamma n \to K^{0}\Sigma^{0}$.
223.
Application of effective medium theory to the analysis of integrated circuit interconnects
Holik, Sonia Maria, January 2010
The design and physical verification of contemporary integrated circuits is a challenging task due to their complexity. A System-in-Package is a typical example of a densely packed assembly of electronic components and interconnects whose initial design relies on computationally intensive electromagnetic simulations; the available computer memory and computational speed therefore become significant limitations, and an alternative method that allows the designer to overcome or reduce these limits is desirable. This work presents the first demonstration of the application of effective medium theory to the analysis of those segments of an integrated system where the interconnect networks are most dense. The approach exploits the deep-subwavelength character of interconnect structures. To define a homogeneous equivalent for an interconnect grating structure, several steps were followed to prove the homogenisation concept and finally to express it as an analytical formulation. A set of parameters (metal fill factor, aspect ratio, dielectric background and period-to-wavelength ratio) with values related to typical design rules was considered; relating these parameters allows empirical models to be defined. To show the relationship between existing effective medium theories and those developed in this Thesis, the empirical models are expressed in terms of the Maxwell-Garnett mixing rule with an additional scaling factor. The distribution of the scaling factor was analysed in terms of the calculated reflection and transmission coefficients of the homogenised structures that are equivalent to a given grating geometry. Finally, the scaling factor for each empirical model was expressed by an analytical formula, and the models were validated by applying them to the numerical analysis of grating structures, comparing the reflection and transmission coefficients obtained for the detailed and homogenised structures. To ensure the empirical models can be broadly employed, their performance under non-normally incident plane-wave illumination was evaluated: for angles within ±30° the model is accurate to 5%. The impact of the grating shape, specifically the tapered profile typical of actual fabricated interconnects, was also considered, with sidewall tapers of up to 5° keeping the error below 5%. Experimental validation of the homogenisation concept applied to interconnects is presented for two main applications: estimating the reflectivity of a whole chip in a System-in-Package, and estimating the performance of interconnects on the lower metal layers of an interconnect stack. For the first, free-space measurements are taken of a grating plate with parallel copper rods illuminated by a plane wave in the X-band (8.2-12.4 GHz). For the second, S-parameters are measured for microstrip waveguides with a number of metal rods embedded in the substrate between the signal line and ground plane. The good agreement with simulations validates the homogenisation approach for the analysis of interconnects.
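As a point of reference for the mixing rule mentioned above, here is a minimal Python sketch of the classic Maxwell-Garnett formula with a multiplicative scaling factor attached; the material values, fill factor and the value of the scaling factor are illustrative assumptions, not the fitted empirical models of the thesis.

```python
import numpy as np

def maxwell_garnett(eps_incl, eps_bg, fill):
    """Classic Maxwell-Garnett effective permittivity for inclusions of
    permittivity eps_incl at volume fraction fill in a background eps_bg."""
    num = 3.0 * fill * eps_bg * (eps_incl - eps_bg)
    den = eps_incl + 2.0 * eps_bg - fill * (eps_incl - eps_bg)
    return eps_bg + num / den

# Copper at 10 GHz, described by its conductivity (sigma ~ 5.8e7 S/m).
eps0, sigma, f_hz = 8.854e-12, 5.8e7, 10e9
eps_metal = 1 - 1j * sigma / (2 * np.pi * f_hz * eps0)

# Hypothetical scaling factor standing in for the thesis's empirical fit;
# eps_bg = 3.9 corresponds to an SiO2-like dielectric background.
scale = 1.0
eps_eff = scale * maxwell_garnett(eps_metal, eps_bg=3.9, fill=0.3)
print(eps_eff)
```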
224.
The design and production of the LHCb VELO high voltage system and analysis of the $B_d \to K^{*}\mu^{+}\mu^{-}$ rare decay
Rakotomiaramanana, Barinjaka, January 2010
LHCb is the dedicated flavour physics experiment at the LHC, designed to probe new physics through measurements of CP violation and rare decays. This thesis includes simulation studies of the $B_d \to K^{*}\mu^{+}\mu^{-}$ decay. The LHCb vertex locator (VELO) is the highest precision tracking detector at the LHC and is used to identify primary and secondary vertices for the identification of $b$ and $c$ hadrons. The VELO modules contain silicon strip detectors which must be operated under reverse bias voltage. This thesis presents the work performed on the design, production and characterisation of the VELO high voltage (HV) system. The VELO operates only 8 mm from the LHC beam in a high radiation environment, and a future upgrade will require operation at fluences of up to $10^{16}\,n_{eq}\,\mathrm{cm}^{-2}$; this thesis therefore also presents a characterisation of p-type silicon sensors before and after heavy irradiation. The design of the HV system and the substantial programme of quality assurance tests performed on both its hardware and software are described. The tests cover normal operation and a range of failure scenarios; the hardware and software limits were tested, and the stability of the output over time and the noise of the system were assessed. The performance is found to meet the specification, although problems at low voltage and low current operation are seen. An analysis of the current-voltage data during module production and commissioning, up to first LHC operation, is given; no obvious signs of sensor degradation are seen. The VELO high voltage system complies with the safety and performance requirements of its operating environment and has been operated successfully throughout the first period of LHC running. With its current design, LHCb expects to collect approximately 10 fb$^{-1}$ of data; running beyond this will require an upgrade of LHCb with more radiation hard silicon strip detectors, of which p-type detectors are one possible candidate. Tests performed on p-type detectors with four types of isolation technique are detailed. The breakdown voltages and the full depletion voltage before irradiation were measured: breakdown voltages above 1000 V are found for each type of isolation technique, except for an isolation scheme with individual p-stops, and the average depletion voltage is approximately 170 V. The current-voltage characteristics, breakdown voltage and charge collection of five irradiated p-type detectors were measured. Approximately 30% of the maximum charge is collected at a fluence of $10^{16}\,n_{eq}\,\mathrm{cm}^{-2}$ for a bias voltage of 1000 V. At a fluence of $2\times10^{15}\,n_{eq}\,\mathrm{cm}^{-2}$, the detector with p-spray could be biased to a higher voltage before breakdown than the detector with common p-stops, consistent with the expectation that the p-spray technology performs better under irradiation. The n-on-p detectors are found to be promising candidates at the fluences expected at the high luminosity upgrade of the LHC (SLHC), although at the highest fluences the charge collection efficiency is significantly reduced and the detectors must be operated at high voltages and low temperatures. The $B_d \to K^{*}\mu^{+}\mu^{-}$ decay is a rare flavour changing neutral current decay which proceeds via the $b \to s$ transition.
This decay is one of the golden modes of LHCb owing to its sensitivity to New Physics contributions beyond the Standard Model through observables such as the forward-backward asymmetry ($A_{FB}$) and its zero crossing point ($S_{0}$). The $B_d \to K^{*}\mu^{+}\mu^{-}$ event selection is described and is used to evaluate the signal and background yields. The estimated signal yield from simulation is $4360^{+1160}_{-1040}$ events per 2 fb$^{-1}$; the background rate is estimated to be $5300 \pm 1800$ events per 2 fb$^{-1}$. Binned and unbinned methods for extracting $A_{FB}$ and $S_{0}$ are discussed: the unbinned method gives direct access to the value of $S_{0}$, while the binned method may introduce a small bias in the mean value of $S_{0}$ because of the assumptions made when fitting the data close to the crossing point. It is estimated that $S_{0}$ can be obtained with an accuracy of $\pm$ 1.1 GeV$^{2}/c^{4}$, $\pm$ 0.38 GeV$^{2}/c^{4}$ and $\pm$ 0.17 GeV$^{2}/c^{4}$ with data samples of 0.2 fb$^{-1}$, 2 fb$^{-1}$ and 10 fb$^{-1}$, respectively. The effect of misalignments of the VELO (and other tracking detectors) on the analysis is also studied. Significant misalignments can have a large effect on the event selection efficiency; however, at the level of alignment already obtained from the first LHC data, the expected change is less than 10%. A method to study the effects of misalignments directly on $A_{FB}$ is also demonstrated.
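To make the binned extraction concrete, here is a hedged Python sketch using toy data only (the linear dependence of $A_{FB}$ on $q^{2}$ and its crossing point at $q^{2} = 4$ GeV$^{2}/c^{4}$ are invented for illustration): forward and backward counts are formed in bins of $q^{2}$ and the zero crossing is read off a straight-line fit.

```python
import numpy as np

# Toy events: each carries a dimuon mass squared q2 (GeV^2/c^4) and a
# forward/backward label derived from the lepton helicity angle.
rng = np.random.default_rng(1)
q2 = rng.uniform(1.0, 6.0, 50_000)
true_afb = 0.1 * (q2 - 4.0)          # toy asymmetry, crossing zero at q2 = 4
cos_theta = np.sign(rng.uniform(-1.0, 1.0, q2.size) + true_afb)

# Binned extraction: A_FB = (N_F - N_B) / (N_F + N_B) per q2 bin.
edges = np.linspace(1.0, 6.0, 11)
centres = 0.5 * (edges[:-1] + edges[1:])
afb = []
for lo, hi in zip(edges[:-1], edges[1:]):
    sel = (q2 >= lo) & (q2 < hi)
    n_f, n_b = np.sum(cos_theta[sel] > 0), np.sum(cos_theta[sel] < 0)
    afb.append((n_f - n_b) / (n_f + n_b))

# Estimate the zero-crossing point S0 from a straight-line fit; a real
# analysis would restrict the fit to bins near the crossing.
slope, intercept = np.polyfit(centres, afb, 1)
print(f"S0 estimate: {-intercept / slope:.2f} GeV^2/c^4")
```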
225.
Quantum entanglement of the spatial modes of light
Jack, Barry, January 2012
This thesis is a dissemination of the experimental work I have carried out in the last three and a half years under the supervision of Prof. Miles Padgett and Dr. Sonja Franke-Arnold. Presented within are seven unique experiments investigating the orbital angular momentum (OAM) states of light and the associated spatial modes; six of these experiments concern measurements on quantum-entangled photon pairs produced in down-conversion. The first chapter of my thesis is a brief review of some of the contributions made to the field of OAM research, involving both classical and quantum states of light; it introduces some of the hallmark experiments within the subject, which inspired the experimental work reported in this thesis. The second chapter details the setup of the down-conversion experiment and the experimental techniques used to design a fully functioning quantum measurement system. Most importantly, this includes the holographic techniques used to measure the spatial states of the photon pairs. In addition to holographic measurements, a system to holographically auto-align the down-conversion experiment was developed; given the sensitive nature of the experiments presented, this automated system has been crucial to the success of all of the single-photon experiments presented within this document. The experimental results are split into three separate categories. The first (Chapter 3) describes measurements investigating the Fourier relationship between OAM and angular position states, at both the classical and quantum levels. The following chapter (Chapter 4) consists of four experiments designed to quantify the degree of entanglement of states of OAM and angular position. This includes the first demonstration of the historic EPR (Einstein-Podolsky-Rosen) paradox for OAM and angle states, violation of a Bell-type inequality for arbitrary OAM states, and characterisation of the density matrices for a range of OAM state-spaces. The final chapter (Chapter 5) reports a new type of ghost imaging using down-converted photon pairs: we violate a Bell inequality within a ghost image, demonstrating the entangled nature of our system and contributing a new element to the long-standing contention over quantum versus classical features within ghost imaging. These experiments have seen a wide range of collaboration. The experimental work on the Fourier relation for single photons was carried out in collaboration with Dr. Anand Kumar Jha (University of Rochester); the work on ghost imaging was performed in collaboration with Prof. Monika Ritsch-Marte (Innsbruck Medical University); and the angular EPR paradox work was carried out in collaboration with Prof. Robert Boyd (University of Rochester) and Prof. David Ireland (University of Glasgow). The work I present here is experimental; any theoretical developments are in large part due to the support of Dr. Sonja Franke-Arnold and Prof. Steve Barnett (University of Strathclyde).
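As background to the holographic technique mentioned above, the sketch below (Python/NumPy; the grid size, OAM charge and grating period are arbitrary illustrative choices) generates the standard "forked" phase hologram whose first diffraction order carries $\ell$ units of OAM per photon; displayed on a spatial light modulator, such holograms are the usual tool for creating and measuring OAM states.

```python
import numpy as np

# Forked phase hologram: the phase of an ell-charge spiral added to a
# blazed grating, wrapped to [0, 2*pi).
n, ell, period = 512, 3, 32          # pixels, OAM charge, grating period (px)
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
phi = np.arctan2(y, x)               # azimuthal angle about the beam axis

# A flat input beam diffracted into the first order of this pattern
# acquires the phase factor exp(i * ell * phi), i.e. OAM of ell per photon.
hologram = np.mod(ell * phi + 2 * np.pi * x / period, 2 * np.pi)
print(hologram.shape, float(hologram.min()), float(hologram.max()))
```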
226.
Study of the dynamics of soft interactions with two-particle angular correlations at ATLAS
Oropeza Barrera, Cristina, January 2012
Measurements of inclusive two-particle angular correlations in proton-proton collisions at centre-of-mass energies of 900 GeV and 7 TeV are presented. The events were collected with the ATLAS detector at the LHC, using a single-arm minimum-bias trigger, during 2009 and 2010. Correlations are measured for charged particles in the kinematic range $p_T > 100$ MeV and $|\eta| < 2.5$. In total, integrated luminosities of 7 $\mu$b$^{-1}$ and 190 $\mu$b$^{-1}$ are analysed for the 900 GeV and 7 TeV data, respectively. At 900 GeV only events with a charged-particle multiplicity $n_{ch} \geq 2$ are analysed, whereas at 7 TeV a second phase-space region of $n_{ch} \geq 20$, with a suppressed contribution from diffractive events, is also explored. The data are corrected using a novel approach in which the detector effects are applied repeatedly to the observable distribution and the result is extrapolated back to zero detector effect. A complex structure in pseudorapidity and azimuth is observed in the correlation function at both collision energies. Projections of the two-dimensional correlation distributions are compared to the Monte Carlo generators Pythia 8 and Herwig++ as well as to the AMBT2B, DW and Perugia 2011 tunes of Pythia 6. The strength of the correlations seen in the data is not reproduced by any of the models.
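For orientation, a minimal sketch of how such a correlation function is commonly constructed is given below (Python/NumPy with toy, uncorrelated events; the bin edges and event sizes are arbitrary): same-event pairs form the foreground, mixed-event pairs the uncorrelated reference, and their normalised ratio gives the correlation in $(\Delta\eta, \Delta\phi)$.

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy events: each row of an event is one particle's (eta, phi).
events = [np.column_stack([rng.uniform(-2.5, 2.5, 30),
                           rng.uniform(-np.pi, np.pi, 30)])
          for _ in range(200)]

def pair_hist(a, b, bins):
    """Histogram of (delta eta, delta phi) over all pairs between a and b."""
    deta = a[:, None, 0] - b[None, :, 0]
    dphi = (a[:, None, 1] - b[None, :, 1] + np.pi) % (2 * np.pi) - np.pi
    h, _, _ = np.histogram2d(deta.ravel(), dphi.ravel(), bins=bins)
    return h

bins = [np.linspace(-5, 5, 51), np.linspace(-np.pi, np.pi, 51)]
fg = sum(pair_hist(ev, ev, bins) for ev in events)        # same-event pairs
bg = sum(pair_hist(events[i], events[i + 1], bins)        # mixed-event pairs
         for i in range(len(events) - 1))

# Normalised foreground/background ratio; self-pairs (the spike at the
# origin) are left in for brevity but excluded in a real analysis.
corr = (fg / fg.sum()) / (bg / bg.sum()) - 1.0
print(corr.shape)
```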
227.
Measurement of polarisation observables using linearly polarised photons with the Crystal Ball at MAMI
Howdle, David A., January 2012
In order to further study and expand the kinematic coverage of polarisation observables in pseudoscalar meson photoproduction, a measurement has been performed at the MAMI facility in Mainz, Germany. The measurement used a beam of linearly polarised photons, produced via the coherent bremsstrahlung method and tagged with the Glasgow Tagged Photon Spectrometer. The photon beam was incident on a liquid hydrogen (LH$_{2}$) target to produce the meson photoproduction reaction $\gamma p \to \pi^{0} p$. The target was housed in the centre of the Crystal Ball detector, which was used to detect the reaction products, and a carbon polarimeter was used to measure the polarisation of the recoiling proton through secondary scattering. The polarisation observables measured were: $\Sigma$, the modulation induced in the reaction products by the linearly polarised photon beam; $O_{x}$, the transfer of linear polarisation from the beam to the recoiling proton; and $T$, the polarisation inherent in the target proton. These measurements were performed over a wide kinematic range in both photon energy and centre-of-mass polar angle, and were compared to three partial wave analyses: SAID, MAID and Bonn-Gatchina. The results contribute to the ongoing search for a complete understanding of the nucleon's excitation spectrum, and significantly enhance the world dataset for these polarisation observables.
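As an illustration of how one such observable is typically extracted, the hedged sketch below (Python with NumPy and SciPy; the polarisation degree, asymmetry value and yields are toy numbers) fits the $\cos 2\phi$ modulation that a linearly polarised beam induces in the azimuthal yield to recover $\Sigma$.

```python
import numpy as np
from scipy.optimize import curve_fit

# With a linearly polarised beam the azimuthal yield is modulated as
#   N(phi) = N0 * (1 - P * Sigma * cos(2*phi)),
# where P is the degree of linear polarisation.
rng = np.random.default_rng(3)
P, sigma_true = 0.7, 0.4                         # toy inputs
phi = np.linspace(-np.pi, np.pi, 36)
expected = 1000.0 * (1 - P * sigma_true * np.cos(2 * phi))
yields = rng.poisson(expected).astype(float)     # counting fluctuations

def model(phi, n0, sigma):
    return n0 * (1 - P * sigma * np.cos(2 * phi))

popt, pcov = curve_fit(model, phi, yields, p0=[1000.0, 0.0],
                       sigma=np.sqrt(yields))
print(f"Sigma = {popt[1]:.3f} +/- {np.sqrt(pcov[1, 1]):.3f}")
```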
228.
Characterisation of silicon-silicon hydroxide catalysis bonds for future gravitational wave detectors
Beveridge, Nicola Louise, January 2012
The first generation of gravitational wave detectors is currently undergoing significant upgrades to increase sensitivity by a factor of ten. These upgrades include the installation of quasi-monolithic silica suspensions to reduce the thermal noise of the test masses and their suspensions: fused silica fibres are welded to fused silica interface pieces, called 'ears', which provide suitable welding points when attached to the sides of the mirror using the high strength chemical jointing technique of hydroxide-catalysis bonding. Plans are developing for the design of potential 'future generation' gravitational wave detectors, which may operate at cryogenic temperatures to further reduce thermal noise. Silicon is a prime candidate material for the test masses and their suspensions because of its desirable thermo-mechanical properties in the cryogenic regime. With some adaptation, hydroxide-catalysis bonding may also be a viable technique for third generation detectors; however, to evaluate its suitability it is essential to quantify both the strength of silicon-silicon bonds at cryogenic temperatures and the mechanical loss of such bonds, since the latter directly affects the bond's contribution to the overall thermal noise of a bonded suspension. To make bonding of silicon components possible, the bonding surfaces must ideally have a thin coating of SiO2 with which the hydroxide can react to form the bond. In Chapters 3 and 4, the strength of hydroxide-catalysis bonds between silicon blocks at room and cryogenic temperatures is investigated. Chapter 3 investigates the minimum SiO2 thickness necessary for a successful bond: the bond strength, measured using a 4-point bend test, is found to fall significantly at cryogenic temperature for oxide layers thinner than 50 nm, and a Weibull analysis of the results showed a characteristic strength of approximately 41 MPa at 77 K and 35 MPa at room temperature for samples with a minimum oxide layer of 50 nm. Chapter 4 studies the effect of the oxide layer deposition method and of the purity of the silicon ingot on the strength of the bond. Bend strength tests were performed on hydroxide-catalysis bonds formed between silicon samples of different crystallographic orientation and purity that had been oxidised using three methods: dry thermal oxidation, ion beam sputtering and e-beam deposition. The method used was found to influence the strength of the resulting bond, with the e-beam deposited layers producing the weakest samples; it is postulated that the lower strength of the e-beam samples is correlated with the lower density of this type of coating compared with the other coating methods. Chapter 5 presents measurements of the mechanical loss of the bond between silicon cantilevers between 10 K and 250 K: the experimental setup is described, and the results are presented and analysed to establish an upper limit of 0.12 for the second bending mode below 100 K, with the lowest measured loss being 0.06 at 12 K.
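For reference, the Weibull analysis mentioned above can be sketched as follows (Python with NumPy and SciPy; the sample strengths are synthetic toy data, not the thesis measurements): failure stresses from the bend tests are fit to a two-parameter Weibull distribution, whose scale parameter is the characteristic strength, i.e. the stress at which 63.2% of samples have failed.

```python
import numpy as np
from scipy.stats import weibull_min

# Synthetic bend-test failure stresses (MPa), standing in for real data.
rng = np.random.default_rng(4)
strengths = weibull_min.rvs(c=5.0, scale=41.0, size=30, random_state=rng)

# Two-parameter Weibull fit: fix the location at zero and estimate the
# Weibull modulus (shape) and characteristic strength (scale).
shape, _, char_strength = weibull_min.fit(strengths, floc=0.0)
print(f"Weibull modulus m = {shape:.1f}, "
      f"characteristic strength = {char_strength:.1f} MPa")
```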
229.
Numerical modelling of low temperature plasma
MacLachlan, Craig S., January 2009
The intention of this thesis is to gain a better understanding of the basic physical processes occurring in low temperature plasmas, achieved by applying both analytic and numerical models. Low temperature plasmas are found in both technological and astrophysical contexts, and three different situations are investigated: an instability in electronegative plasmas; electron avalanches during plasma initiation; and a phenomenon called the Critical Ionisation Velocity interaction. Industrial plasma discharges with electronegative gases are found to be unstable in certain conditions: fluctuations in light emission, particle number densities and potential are observed, and the instability has been reproduced in a variety of experiments. Reports from these experiments are discussed to characterise the key features of the instability, and a physical process not previously considered, which could explain the instability, is introduced. The instability relies on the plasma's transparency to the electric field. This mechanism is investigated using simple zero-dimensional numerical and analytic models, and the results are compared to experiment: the calculated frequencies are in good agreement with the experimental measurements, showing that the instability mechanism described here is relevant. For the remaining two problems a three-dimensional particle model is constructed. This model calculates the trajectory of each individual particle, while the potential field is solved self-consistently on a computational mesh. Poisson's equation is solved using a Multigrid technique, an iterative method that uses many grids of different resolutions to smooth the error on all spatial scales. The mathematical foundation and the components of the Multigrid method are presented, and several test cases with known analytic solutions of Poisson's equation are used to determine the accuracy of the solver; the implemented solver is found to be both efficient and accurate. Collisions are vitally important to the evolution of plasmas: the chemistry resulting from collisions is the reason why plasmas are so useful in technological applications. Electron collisions are included in the particle model using a Monte-Carlo technique; a basic method is given, several improvements are described, the most efficient combination of improvements is determined through a series of test cases, and the error resulting from the collision selection process is characterised. Technological plasmas are formed by the electrical breakdown of a neutral gas; at atmospheric pressure the breakdown occurs as an electron avalanche. The particle model is used to simulate the nanosecond evolution of the avalanche from a single electron-ion pair, with special attention paid to the inelastic collisions and the creation of metastables; the inelastic losses are used to estimate the photon emission from the electron avalanche. Finally, the Critical Ionisation Velocity phenomenon is investigated using the particle model. When a neutral gas streams across a magnetised plasma, the ionisation rate increases rapidly if the speed of the neutrals exceeds a critical value: collisions between neutrals and positive ions create pockets of unbalanced negative charge, and electrons in these pockets are accelerated by their potential field and can reach energies capable of ionisation. The evolution of such an electron overdensity is simulated, and the electrons' energy gain under different density and magnetic field conditions is calculated. The results from the simulation may explain the discrepancy between laboratory and space experiments.
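To give a flavour of the Multigrid technique described above, here is a minimal, hedged sketch of a one-dimensional V-cycle for Poisson's equation (Python/NumPy; the weighted-Jacobi smoother, grid sizes and test problem are illustrative choices, not the thesis's three-dimensional implementation): each level smooths the error at its own scale and passes the residual to a coarser grid for correction.

```python
import numpy as np

def smooth(u, f, h, sweeps=3, omega=2 / 3):
    """Weighted-Jacobi sweeps for -u'' = f with u = 0 at both ends."""
    for _ in range(sweeps):
        u[1:-1] = ((1 - omega) * u[1:-1]
                   + omega * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]))
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def v_cycle(u, f, h):
    if u.size <= 3:                          # coarsest grid: solve exactly
        u[1] = 0.5 * h * h * f[1]
        return u
    u = smooth(u, f, h)                      # pre-smoothing
    r = residual(u, f, h)
    rc = np.zeros((u.size - 1) // 2 + 1)     # full-weighting restriction
    rc[1:-1] = 0.25 * (r[1:-2:2] + 2 * r[2:-1:2] + r[3::2])
    ec = v_cycle(np.zeros_like(rc), rc, 2 * h)   # coarse-grid correction
    e = np.zeros_like(u)
    e[2:-1:2] = ec[1:-1]                     # prolongation: copy coarse points
    e[1::2] = 0.5 * (e[:-1:2] + e[2::2])     # ...and interpolate between them
    return smooth(u + e, f, h)               # post-smoothing

n = 2**7 - 1                                 # interior points (grids 2^k - 1)
x = np.linspace(0.0, 1.0, n + 2)
f = np.pi**2 * np.sin(np.pi * x)             # exact solution: sin(pi * x)
u = np.zeros_like(x)
for _ in range(10):
    u = v_cycle(u, f, 1.0 / (n + 1))
print("max error:", np.abs(u - np.sin(np.pi * x)).max())
```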
230.
Neutrino masses and Baryogenesis via Leptogenesis in the Exceptional Supersymmetric Standard Model
Luo, Rui, January 2010
Neutrino oscillation experiments have shown that (left-handed) neutrinos have masses much smaller than those of the charged leptons and quarks of the Standard Model. One solution to the light neutrino mass puzzle is the seesaw model, in which right-handed neutrinos with large Majorana masses are introduced. The heavy Majorana right-handed (RH) neutrinos lead to lepton number violation in the early universe: they decay into either leptons or anti-leptons via Yukawa couplings, and the CP asymmetries of these decays result in a lepton number asymmetry in the universe, which can then be converted into a baryon number asymmetry via the electroweak sphaleron process. This mechanism, called leptogenesis, explains the baryon asymmetry of the universe. However, one finds that in order to generate enough baryon number in the universe, the reheating temperature, which is required to be of the order of the lightest right-handed neutrino mass, has to be higher than $\sim 10^{9}$ GeV. Such a high reheating temperature would lead to overproduction of gravitinos in the universe, in conflict with present observations. We investigate leptogenesis in the Exceptional Supersymmetric Standard Model and find that the extra Yukawa couplings drastically enhance the CP asymmetries of the RH neutrino decays. The evolution of the lepton and baryon asymmetries is described by Boltzmann equations; numerical solution of these equations shows that the correct amount of baryon number in the universe can be achieved with a lightest right-handed neutrino mass of $\sim 10^{7}$ GeV, avoiding the gravitino overproduction problem.
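As a sketch of the kind of Boltzmann system involved, the hedged Python example below integrates the standard simplified equations for thermal leptogenesis (decays and inverse-decay washout only, in terms of z = M1/T, with a toy decay parameter K and CP asymmetry epsilon); the full network solved in the thesis, with the extra E6SSM couplings, is considerably more involved.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import kn

K, eps = 10.0, 1e-6     # toy decay parameter and CP asymmetry

def rhs(z, y):
    n_n1, n_bl = y
    n_eq = 0.375 * z * z * kn(2, z)          # equilibrium N1 abundance
    decay = K * z * kn(1, z) / kn(2, z)      # decay / inverse-decay term
    washout = 0.25 * K * z**3 * kn(1, z)     # inverse-decay washout
    dn_n1 = -decay * (n_n1 - n_eq)
    dn_bl = -eps * decay * (n_n1 - n_eq) - washout * n_bl
    return [dn_n1, dn_bl]

# Start in equilibrium with no initial asymmetry; sign conventions vary.
z0, z1 = 0.1, 50.0
y0 = [0.375 * z0**2 * kn(2, z0), 0.0]
sol = solve_ivp(rhs, (z0, z1), y0, method="Radau", rtol=1e-8, atol=1e-14)
print(f"final B-L asymmetry: {sol.y[1, -1]:.2e}")
```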