101

Research and development of accounting system in grid environment

Chen, Xiaoyn January 2010 (has links)
The Grid has been recognised as the next-generation distributed computing paradigm: it seamlessly integrates heterogeneous resources across administrative domains into a single virtual system. An increasing number of scientific and business projects employ Grid computing technologies for large-scale resource sharing and collaboration. Early adopters of Grid computing implemented custom middleware to bridge the gaps between heterogeneous computing backbones. These custom solutions form the basis of the emerging Open Grid Service Architecture (OGSA), which aims to address common concerns of Grid systems by defining a set of interoperable and reusable Grid services. One of the common concerns defined in OGSA is the Grid accounting service, whose main objective is to ensure that resources are shared within a Grid environment in an accountable manner, by metering and logging accurate resource usage information. This thesis discusses the origins and fundamentals of Grid computing and the accounting service in the context of the OGSA profile. A prototype based on OGSA accounting-related standards was developed and evaluated, enabling accounting data to be shared in a multi-Grid environment, the Worldwide LHC Computing Grid (WLCG). Based on this prototype and the lessons learned, a generic middleware solution was also implemented as a toolkit that eases the migration of existing accounting systems towards standards compliance.
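The accounting workflow described above — metering a job's resource usage and logging it as a record that can be shared between Grids — can be sketched minimally as follows. The record fields loosely echo the OGF Usage Record idea, but the field names and the `meter_job` helper are illustrative assumptions, not the thesis's actual middleware API.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class UsageRecord:
    # Minimal, illustrative accounting record for one Grid job;
    # real OGSA/OGF usage records carry many more fields.
    job_id: str
    user_dn: str        # distinguished name of the submitting user
    site: str           # resource provider within the virtual organisation
    cpu_seconds: float
    wall_seconds: float

def meter_job(job_id, user_dn, site, cpu_seconds, wall_seconds):
    """Build a usage record and serialise it for exchange between Grids."""
    record = UsageRecord(job_id, user_dn, site, cpu_seconds, wall_seconds)
    return json.dumps(asdict(record), sort_keys=True)

serialised = meter_job("job-0042", "/C=UK/O=eScience/CN=alice",
                       "RAL-LCG2", cpu_seconds=3600.0, wall_seconds=4200.0)
print(serialised)
```

Serialising to a neutral interchange format is the point: each Grid can meter usage with its own tools as long as the published record schema is shared.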
102

Measurements of B± meson production at LHCb and characterisation of hybrid photon detectors

Young, Ross Donaldson January 2012 (has links)
LHCb is an experiment designed to make precision measurements of Charge-Parity (CP) violation in the B meson system. We report a measurement of the B± cross-section and production asymmetry, using B± → J/ψ K± decays collected with the LHCb detector in 2010 and 2011. Using 27.6 pb⁻¹ of pp collisions at a centre-of-mass energy of 7 TeV, we obtain a B± cross-section of [41.6 ± 0.6 (stat.) ± 3.0 (sys.) ± 4.2 (lumi.)] μb in the rapidity region 2 to 4.5. Using 371.1 pb⁻¹ of pp collisions at a centre-of-mass energy of 7 TeV, we obtain a B± production asymmetry of [-2.09 ± 1.20 ± 0.8 (CP)]% in the same rapidity region. The Ring Imaging Cherenkov system of LHCb uses Hybrid Photon Detectors (HPDs) for single-photon detection. This thesis also summarises the use of ion feedback measurements as indicators of HPD vacuum quality.
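A production asymmetry like the one quoted is, at its simplest, a normalised difference of yields. The sketch below shows the raw asymmetry and its binomial statistical uncertainty; the yield numbers are invented for illustration, and the real analysis must additionally correct for detection and CP asymmetries.

```python
import math

def raw_asymmetry(n_plus, n_minus):
    """Raw yield asymmetry A = (N+ - N-) / (N+ + N-) and its
    binomial statistical uncertainty."""
    n = n_plus + n_minus
    a = (n_plus - n_minus) / n
    sigma = math.sqrt((1 - a**2) / n)
    return a, sigma

# Hypothetical B+ and B- yields, chosen to give an asymmetry of -2%.
a, sigma = raw_asymmetry(4900, 5100)
print(f"A = {100*a:.2f}% ± {100*sigma:.2f}%")  # A = -2.00% ± 1.00%
```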
103

LHCb hybrid photon detectors and sensitivity to flavour specific asymmetry in neutral B-Meson mixing

Lambert, Robert William January 2009 (has links)
The Large Hadron Collider started operation this year, 2008. LHCb is a precision heavy-flavour experiment at this collider. The precision of LHCb is greatly aided by its Ring Imaging Cherenkov system for the separation and identification of charged hadrons. This system uses pixel Hybrid Photon Detectors, an innovative technology for single-photon imaging. The simulation and testing of these photon detectors are reported and discussed. The photodetectors were measured to have reached or exceeded the specifications in key areas; in particular, the detector quantum efficiencies far exceed expectations, by a relative 27%. The precision of LHCb will be used to examine CP violation and rare decays of B mesons. A key part of the physics programme will be a measurement of the CP-violating flavour-specific asymmetry in neutral B-meson mixing. This asymmetry is expected to be very small in the Standard Model, of order 10⁻⁴; however, it is very sensitive to new physics, which can increase the asymmetry dramatically. We present an improved event selection and a novel method to control systematics. This will enable us to make a world-leading measurement of this parameter in one nominal year of data taking (2 fb⁻¹).
104

Search for Pair-Produced Supersymmetric Top Quark Partners with the ATLAS Experiment

Abulaiti, Yiming January 2016 (has links)
Searches for the supersymmetric partner of the top quark (stop) are motivated by natural supersymmetry, where the stop has to be light to cancel the large radiative corrections to the Higgs boson mass. This thesis presents three different searches for the stop at √s = 8 TeV and √s = 13 TeV using data from the ATLAS experiment at CERN's Large Hadron Collider. The thesis also includes a study of the primary vertex reconstruction performance in data and simulation at √s = 7 TeV using tt̄ and Z events. All stop searches presented are carried out in final states with a single lepton, four or more jets and large missing transverse energy. A search for direct stop pair production is conducted with 20.3 fb−1 of data at a center-of-mass energy of 8 TeV. Several stop decay scenarios are considered, including those to a top quark and the lightest neutralino and to a bottom quark and the lightest chargino. The sensitivity of the analysis is also studied in the context of various phenomenological MSSM models in which more complex decay scenarios can be present. Two different analyses are carried out at √s = 13 TeV. The first is a search for both gluino-mediated and direct stop pair production with 3.2 fb−1 of data, while the second is a search for direct stop pair production with 13.2 fb−1 of data in the decay scenario to a bottom quark and the lightest chargino. The results of the analyses show no significant excess over the Standard Model predictions in the observed data. Consequently, exclusion limits are set at 95% CL on the masses of the stop and the lightest neutralino.
105

The development of a fast intra-train beam-based feedback system capable of operating on the bunch trains of the International Linear Collider

Bett, Douglas Robert January 2013 (has links)
This thesis will describe the latest work from the Feedback On Nanosecond Timescales project, commonly known as FONT. The goal of the FONT project is the development of a beamline feedback system to be installed at the interaction point (IP) of a future linear collider in order to maximize the luminosity that can be achieved. The prototype FONT feedback system is beam-based, meaning that the correction is determined from direct measurement of the position of the beam, and intra-train, meaning that the correction is applied within the duration of the current bunch train. The FONT system, consisting of three stripline beam position monitors, a digital processor unit built around a Field Programmable Gate Array (FPGA) and a pair of electromagnetic kickers, is described. Recent improvements to the position measurement process are detailed and the performance of the feedback system is presented. The modification of the firmware to operate on a machine with a large number of bunch trains, such as the International Linear Collider, is described and the design is verified through the use of a laboratory test bench developed to simulate such a machine. The FONT5 digital board is proved capable of operating on a train resembling the specification for the International Linear Collider: 2820 bunches separated in time by 308 ns.
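The feedback principle described above — measure each bunch's position and apply a kicker correction to later bunches within the same train — can be illustrated with a bunch-by-bunch proportional loop. The gain, the latency of one bunch, and the constant-offset beam are assumptions for this sketch, not a description of the FONT5 firmware.

```python
def intra_train_feedback(positions, gain=1.0, latency_bunches=1):
    """Schematic intra-train feedback: the correction computed from one
    bunch's measured offset is applied to later bunches in the same train.

    positions       -- uncorrected bunch offsets (arbitrary units)
    gain            -- proportional feedback gain
    latency_bunches -- bunches that pass before a correction takes effect
    """
    correction = 0.0
    pending = [0.0] * latency_bunches   # corrections still in flight
    corrected = []
    for y in positions:
        correction += pending.pop(0)    # correction arriving this bunch
        y_corr = y - correction
        corrected.append(y_corr)
        pending.append(gain * y_corr)   # schedule the next correction
    return corrected

# A constant offset is removed once the feedback latency has elapsed.
train = [1.0] * 6
print(intra_train_feedback(train, gain=1.0, latency_bunches=1))
```

With unit gain and one bunch of latency, the first bunch passes uncorrected and every subsequent bunch is steered to zero — the essence of why an intra-train system needs a long train (such as the ILC's 2820 bunches) to pay off.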
106

Higher order QCD corrections to diboson production at hadron colliders

Rontsch, Raoul Horst January 2012 (has links)
Hadronic collider experiments have played a major role in particle physics phenomenology over the last few decades. Data recorded at the Tevatron at Fermilab is still of interest, and its successor, the Large Hadron Collider (LHC) at CERN, has recently announced the discovery of a particle consistent with the Standard Model Higgs boson. Hadronic colliders look set to guide the field for the next fifteen years or more, with the discovery of more particles anticipated. The discovery and detailed study of new particles relies crucially on the availability of high-precision theoretical predictions for both the signal and background processes. This requires observables to be calculated to next-to-leading order (NLO) in perturbative quantum chromodynamics (QCD). Many hadroproduction processes of interest contain multiple particles in the final state. Until recently, this caused a bottleneck in NLO QCD calculations, due to the difficulty of calculating one-loop corrections to processes involving three or more final-state particles. Spectacular developments in on-shell methods over the last six years have made these calculations feasible, allowing highly accurate predictions for final-state observables at the Tevatron and LHC. A particular realisation of on-shell methods, generalised unitarity, is used to compute the NLO QCD cross-sections and distributions for two processes: the hadroproduction of W⁺W⁺jj and the hadroproduction of W⁺W⁻jj. The NLO corrections to both processes reduce the scale dependence of the results significantly, while having a moderate effect on the cross-sections at the central scale choice and leaving the shapes of the kinematic distributions mostly unchanged. Additionally, the gluon fusion contribution to the next-to-next-to-leading order (NNLO) QCD corrections to W⁺W⁻j production is studied. These contributions are found to be highly dependent on the kinematic cuts used. For cuts used in Higgs searches, the gluon fusion effect can be as large as the NLO scale uncertainty, and should not be neglected. All of the higher-order QCD corrections increase the accuracy and reliability of the theoretical predictions at hadronic colliders.
107

Measurement of the Zγγ production cross section at proton-proton collisions with the CMS experiment

McBride, Sachiko Toda January 1900 (has links)
Doctor of Philosophy / Department of Physics / Yurii Y. Maravin / This thesis presents the first study of the rare production of a Z boson in association with two photons (Zγγ), where the Z boson decays into a pair of muons or electrons, in proton-proton collisions at the Large Hadron Collider (LHC). This study uses the full data sample collected with the Compact Muon Solenoid (CMS) detector in 2012 at a center-of-mass energy of 8 TeV, corresponding to an integrated luminosity of 19.7 fb⁻¹. The Zγγ production cross section is measured within a fiducial region defined by two leptons and two photons, with photon transverse momentum above 15 GeV and photon-lepton separation above 0.4. Using the obtained samples, the Zγγ cross section is measured to be 12.6 ± 1.6 (stat.) ± 1.7 (syst.) ± 0.3 (lumi.) fb, where stat., syst., and lumi. denote the statistical uncertainty, the systematic uncertainty, and the uncertainty in the integrated luminosity, respectively. This result is in excellent agreement with the theoretical prediction of 13.0 ± 1.5 fb.
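Measurements of this kind quote separate statistical, systematic, and luminosity uncertainties. When the components are treated as independent, a single total uncertainty is commonly obtained by adding them in quadrature, as in this sketch; the independence assumption here is mine, not a statement from the thesis.

```python
import math

def total_uncertainty(stat, syst, lumi):
    """Combine independent uncertainty components in quadrature."""
    return math.sqrt(stat**2 + syst**2 + lumi**2)

# The Zγγ cross section quoted above:
# 12.6 ± 1.6 (stat.) ± 1.7 (syst.) ± 0.3 (lumi.) fb
sigma = 12.6
err = total_uncertainty(1.6, 1.7, 0.3)
print(f"{sigma} ± {err:.2f} fb")   # total uncertainty ≈ 2.35 fb
```

The components are usually kept separate in the quoted result precisely because luminosity and systematic uncertainties are correlated across measurements, so downstream combinations need them individually.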
108

The development of missing transverse momentum reconstruction with the ATLAS detector using the PUfit algorithm in pp collisions at 13 TeV

Li, Zhelun 19 August 2019 (has links)
Many interesting physics processes produce non-interacting particles that can only be inferred through the missing transverse momentum. The increase of the proton beam intensity in the Large Hadron Collider (LHC) provides sensitivity to rare physics processes while inevitably increasing the number of simultaneous proton collisions in each event. The missing transverse momentum (MET), defined as the negative vector sum of the transverse momenta of all visible particles, is a variable of great interest. The precision of the MET determination deteriorates as the complexity of the recorded data increases. Given this complexity, a new algorithm is developed to determine the MET more effectively. Several well-understood physics processes were used to test the effectiveness of the newly designed algorithm, and its performance is compared to that of the standard algorithm used in the ATLAS experiment. / Graduate
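The MET definition given above — the negative vector sum of the transverse momenta of all visible particles — translates directly into code. The particle list below is illustrative, and no pile-up mitigation of the kind the thesis's PUfit algorithm performs is attempted here.

```python
import math

def missing_et(particles):
    """MET as the negative vector sum of visible transverse momenta.

    particles -- iterable of (pt, phi) pairs for the visible particles
    Returns (met_magnitude, met_phi).
    """
    mex = -sum(pt * math.cos(phi) for pt, phi in particles)
    mey = -sum(pt * math.sin(phi) for pt, phi in particles)
    return math.hypot(mex, mey), math.atan2(mey, mex)

# Two back-to-back particles of equal pt balance each other: MET ≈ 0.
met, _ = missing_et([(50.0, 0.0), (50.0, math.pi)])

# An unbalanced event: one 50 GeV particle implies 50 GeV of MET
# pointing opposite to it.
met_unbalanced, phi = missing_et([(50.0, 0.0)])
```

In a real detector every reconstructed object contributes with its own resolution, which is why additional pile-up interactions degrade the MET so quickly.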
109

On the use of heterogeneous computing in high-energy particle physics at the ATLAS detector

Sacks, Marc January 2017 (has links)
A dissertation submitted in fulfillment of the requirements for the degree of Master of Physics in the School of Physics November 1, 2017. / The ATLAS detector at the Large Hadron Collider (LHC) at CERN is undergoing upgrades to its instrumentation, as well as the hardware and software that comprise its Trigger and Data Acquisition (TDAQ) system. The increased energy will yield larger cross sections for interesting physics processes, but will also lead to increased artifacts in on-line reconstruction in the trigger, as well as increased trigger rates, beyond the current system’s capabilities. To meet these demands it is likely that the massive parallelism of General-Purpose Programming with Graphic Processing Units (GPGPU) will be utilised. This dissertation addresses the problem of integrating GPGPU into the existing Trigger and TDAQ platforms; detailing and analysing GPGPU performance in the context of performing in a high-throughput, on-line environment like ATLAS. Preliminary tests show low to moderate speed-up with GPU relative to CPU, indicating that to achieve a more significant performance increase it may be necessary to alter the current platform beyond pairing suitable GPUs to CPUs in an optimum ratio. Possible solutions are proposed and recommendations for future work are given. / LG2018
110

Measurement of J/ψ and ψ′ production and J/ψ polarization in p+p collisions at √s = 200 GeV with the PHENIX detector

Donadelli, Marisílvia 08 May 2009 (has links)
The production of J/ψ and ψ′ in p+p collisions at a center-of-mass energy (√s) of 200 GeV has been studied in the PHENIX Experiment at RHIC. The data sample collected during the 2006 running period allowed not only the determination of absolute cross sections but also the study of J/ψ polarization through its decay into the dielectron channel at mid-rapidity. The measurements include the transverse momentum dependence and are compared to those of other experiments in different rapidity ranges and at different collision energies, and to theoretical model predictions. The J/ψ polarization results should provide a constraint on charmonium formation mechanisms, and the measurement of the feed-down from ψ′ to J/ψ is important for understanding prompt J/ψ production as well as the suppression observed in A+A collisions at RHIC.
