1 |
Hierarchical control of the ATLAS experiment. Barriuso Poy, Alejandro. 14 May 2007 (has links)
Hierarchical Control of the ATLAS experiment. Àlex Barriuso Poy.

Control systems used in new high energy physics experiments are becoming ever more complex as a consequence of the size, data volume and complexity inherent to the detector instrumentation. In particular, this becomes visible for the ATLAS experiment (A Toroidal LHC ApparatuS) at the new LHC (Large Hadron Collider) particle accelerator at CERN. ATLAS is the largest particle detector ever built, the result of an international collaboration involving more than 150 institutes and laboratories from around the world. The experiment studies proton-proton collisions and, following the classical structure of a particle detector, is composed of a series of specialized sub-detectors and of a system of superconducting magnets that provide the magnetic field of the experiment.

Concerning the operation of ATLAS, there are two main integrating systems. On the one hand, the DAQ (Data AcQuisition) system performs the acquisition of the data for the subsequent physics studies. On the other hand, the DCS (Detector Control System) ensures the coherent operation of the whole experiment. Although they are two independent systems, they complement each other: while one manages the data used for the subsequent physics studies, the other manages all the infrastructure related to the operational state of the detector, thereby ensuring the correct extraction of information.

The DCS, the main subject of this thesis, supervises all the hardware within the experiment complex, including all the sub-detector services (e.g. high and low voltage, cooling, etc.) and the general infrastructure of the experiment (e.g. environmental conditions). The DCS is also the interface to the systems external to the experiment, such as the CERN technical services (e.g. ventilation or electricity) or, even more crucially, the LHC accelerator and the ATLAS DAQ. In total, around 200,000 channels of information will be supervised at all times by the DCS.

One of the main problems in previous experiments was the lack of standardization in many areas. For example, given the technical scenario of the time, the control systems of the LEP era (1989-2000) used different programming languages, different communication protocols and custom-made hardware. As a consequence, the development and maintenance of the DCS was in many cases a difficult task. In order to overcome the problems of the past, the JCOP project was created at CERN at the end of 1997. The different sub-detectors of ATLAS (as well as those of the three other main LHC experiments) are composed of multiple teams of people working in parallel. The main objective of JCOP is to work in common in order to reduce duplication and, at the same time, to facilitate the integration and future maintenance of the experiments. In this way, components frequently used for the control of industrial plants, such as PLCs, fieldbuses, the OPC protocol or SCADA, have been adopted and are used successfully in the experiments. At the same time, JCOP combines existing commercial products with hardware and software elements specifically created for use in the control of high energy physics experiments. This is the case of the software tool called FSM (Finite State Machine).

The modeling and integration of the many distributed devices that coexist in the DCS is carried out using the FSM.
In this way, control is established by means of distributed, autonomous and cooperative software entities that are organized hierarchically and follow a finite-state machine logic. The FSM tool combines two main technologies: SMI++ (the State Manager Interface toolkit) and a commercial SCADA product. SMI++ (written in C++) has already been used successfully in two high energy physics experiments prior to ATLAS, providing the following functionality: an object-oriented language, a finite-state machine logic, a rule-based expert system, and a platform-independent communication protocol. This functionality is applied at all levels of operation/abstraction of the experiment (e.g. from a valve of a cooling system up to ATLAS as a whole). Thus, based on established rules and careful inter-connections that organize the objects hierarchically, the global automation of the experiment is achieved.

This thesis presents the integration of the ATLAS DCS into a control hierarchy following the natural segmentation of the experiment into sub-detectors and sub-systems. The final integration of the many systems composing the DCS in ATLAS includes tasks such as: the organization of the control software, the identification of process models, the automation of processes, fault detection, synchronization with the DAQ, and the user interface.

Although the experience gained in the past with SMI++ is a good starting point for the design of the ATLAS control hierarchy, new requirements have arisen due to the complexity and size of the experiment. The scalability of the tool has therefore been studied to face the fact that the final ATLAS control hierarchy will be hundreds of times larger than either of the two existing antecedents. A common solution for all the systems composing the DCS has been created with the main objective of achieving a certain homogeneity between the different parts. Thus, an architecture based on three functional levels organizes the systems belonging to the 12 sub-detectors of the experiment. Following this architecture, the different functions and parts of the DCS have been modeled with a similar granularity across sub-detectors, which has led to isomorphic control hierarchies.

The detection, monitoring and diagnosis of faults is an essential part of the operation and task coordination of any high energy physics experiment or industrial plant. The presence of faults in the system distorts the operation and can invalidate the calculations performed for physics research. For this reason, a standard strategy and a standard user interface have been defined with emphasis on the fast detection, monitoring and diagnosis of faults, based on a dynamic fault-handling mechanism. This new mechanism is based on the creation of two communication paths (or parallel hierarchies) that, while handling the faults, give a clearer description of the operating conditions of the experiment. One of the communication paths is populated by objects dedicated to the detection and analysis of faults, while in the other the objects command the operation of the experiment. These two parallel paths cooperate and contain the logic that describes the automation of processes in the DCS. The different objects follow finite-state machines predefined for ATLAS, which facilitates the understanding and future development of the DCS.
Moreover, the fact that the proposed strategy groups and summarizes faults in a hierarchical way considerably facilitates the analysis of these faults in a system of the size of ATLAS. The proposed strategy, modular and distributed, has been validated by means of numerous tests. The result has been a substantial improvement in functionality while, at the same time, maintaining a correct management of the existing resources. This strategy has been successfully implemented and constitutes the standard used in ATLAS for building the control hierarchy.

During the operation of the experiment, the DCS must be synchronized with the DAQ system, which is in charge of the data-taking process for the subsequent physics studies. The process automation of both systems, DAQ and DCS, follows a similar logic based on a hierarchy of finite-state machines (similarities and differences have been identified and presented). Nevertheless, the interaction between the two main integrating systems of ATLAS has so far been limited, but as the start of operations approaches it becomes more important every day. A synchronization mechanism that establishes connections between the different segments of the DAQ system and the DCS control hierarchy has therefore been developed. The adopted solution automatically inserts SMI++ objects into the DCS control hierarchy. These objects allow the DAQ applications to command different sections of the DCS independently and transparently. At the same time, the mechanism does not allow physics data taking while a part of the detector is misbehaving, thus avoiding the extraction of corrupted information while the experiment returns to a safe state. A prototype that achieves the synchronization of the two systems has been implemented and validated, and is ready to be used during the integration of the sub-detectors.

Finally, the interface between the DCS and the user in the control room has been implemented, completing the integration of the different parts of the DCS. The main challenges solved during the design and development phases of the interface were: allowing the operator to control a process of the size of ATLAS, allowing the integration and maintenance of the many different operator displays belonging to the different sub-detectors, and giving the operator the possibility to navigate quickly between the different parts of the DCS. These issues have been solved by combining the functionality of the SCADA system with the FSM tool. The control hierarchy is used by the interface to structure, in an intuitive way, the different displays that make up the DCS. Since each node of the hierarchy represents a portion that can be controlled independently, each node has been assigned a display that contains the information of its level of abstraction within the hierarchy. All the functionality represented in the control hierarchy is accessible within the SCADA displays by means of specially implemented graphical widgets. Using these graphical widgets, on the one hand, the different displays become similar in their form, which facilitates the understanding and use of the interface by the user; on the other hand, the states, transitions and actions that have been defined for the SMI++ objects are easily visible within the interface.
In this way, in case of a possible evolution of the DCS, the development needed to adapt the interface is considerably reduced. In addition, a navigation mechanism has been developed within the interface, making any system within the hierarchy quickly accessible to the operator. The parallel hierarchy dedicated to fault handling is also used within the interface to filter faults and to access the systems in trouble in an efficient way. The interface is modular and flexible enough to be used in new operational scenarios, meets the needs of the different types of users, and facilitates the maintenance during the long lifetime of the experiment, which is foreseen to be up to 20 years. The console has been in use for several months and all the sub-detector hierarchies are currently being integrated. / Hierarchical Control of the ATLAS experiment. Àlex Barriuso Poy.

Control systems at High Energy Physics (HEP) experiments are becoming increasingly complex mainly due to the size, complexity and data volume associated with the front-end instrumentation. In particular, this becomes visible for the ATLAS experiment at the LHC accelerator at CERN. ATLAS will be the largest particle detector ever built, the result of an international collaboration of more than 150 institutes. The experiment is composed of 9 different specialized sub-detectors that perform different tasks and have different requirements for operation. The system in charge of the safe and coherent operation of the whole experiment is called the Detector Control System (DCS).

This thesis presents the integration of the ATLAS DCS into a global control tree following the natural segmentation of the experiment into sub-detectors and smaller sub-systems. The integration of the many different systems composing the DCS includes issues such as: back-end organization, process model identification, fault detection, synchronization with external systems, automation of processes and supervisory control.

Distributed control modeling is applied to the widely distributed devices that coexist in ATLAS. Thus, control is achieved by means of many distributed, autonomous and co-operative entities that are hierarchically organized and follow a finite-state machine logic. The key to the integration of these systems lies in the so-called Finite State Machine tool (FSM), which is based on two main enabling technologies: a SCADA product and the State Manager Interface (SMI++) toolkit. The SMI++ toolkit has already been used with success in two previous HEP experiments, providing functionality such as: an object-oriented language, a finite-state machine logic, an interface to develop expert systems, and a platform-independent communication protocol. This functionality is then used at all levels of the experiment operation process, ranging from the overall supervision down to device integration, enabling the overall sequencing and automation of the experiment.

Although the experience gained in the past is an important input for the design of the detector's control hierarchy, further requirements arose due to the complexity and size of ATLAS. In total, around 200,000 channels will be supervised by the DCS and the final control tree will be hundreds of times bigger than any of its antecedents. Thus, in order to apply a hierarchical control model to the ATLAS DCS, a common approach has been proposed to ensure homogeneity between the large-scale distributed software ensembles of the sub-detectors.
A standard architecture and a human interface have been defined, with emphasis on the early detection, monitoring and diagnosis of faults, based on a dynamic fault-data mechanism. This mechanism relies on two parallel communication paths that manage the faults while providing a clear description of the detector conditions. The DCS information is split and handled by different types of SMI++ objects; whilst one path of objects manages the operational mode of the system, the other is dedicated to handling eventual faults. The proposed strategy has been validated through many different tests with positive results in both functionality and performance. This strategy has been successfully implemented and constitutes the ATLAS standard to build the global control tree.

During the operation of the experiment, the DCS, responsible for the detector operation, must be synchronized with the data acquisition system, which is in charge of the physics data-taking process. The interaction between both systems has so far been limited, but becomes increasingly important as the detector nears completion. A prototype implementation, ready to be used during the sub-detector integration, has achieved data reconciliation by mapping the different segments of the data acquisition system into the DCS control tree. The adopted solution allows the data acquisition control applications to command different DCS sections independently and prevents incorrect physics data taking caused by a failure in a detector part.

Finally, the human-machine interface presents and controls the DCS data in the ATLAS control room. The main challenges faced during the design and development phases were: how to support the operator in controlling this large system, how to maintain integration across the many displays, and how to provide effective navigation. These issues have been solved by combining the functionalities provided by both the SCADA product and the FSM tool. The control hierarchy provides an intuitive structure for the organization of the many different displays that are needed for the visualization of the experiment conditions. Each node in the tree represents a workspace that contains the functional information associated with its abstraction level within the hierarchy. By means of an effective navigation, any workspace of the control tree is accessible to the operator or detector expert within a common human interface layout. The interface is modular and flexible enough to be adapted to new operational scenarios, fulfil the needs of the different kinds of users and facilitate the maintenance during the long lifetime of the detector of up to 20 years. The interface has been in use for several months, and the sub-detectors' control hierarchies, together with their associated displays, are currently being integrated into the common human-machine interface.
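To make the hierarchical control model described above more concrete, the following minimal Python sketch illustrates the general idea of a control tree in which every node carries two parallel summaries, an operational state and a fault status, and a parent derives its own summary from the worst case among its children. The class, state and status names are illustrative assumptions and are not taken from the ATLAS DCS or the SMI++ toolkit, where the objects are distributed processes rather than in-memory instances.

    # Minimal sketch (illustrative only) of a hierarchical finite-state-machine
    # control tree with two parallel summaries per node: an operational state
    # and a fault status.  Names and rules are invented for this example.

    class ControlNode:
        STATE_ORDER = ["READY", "NOT_READY", "UNKNOWN"]       # best to worst
        STATUS_ORDER = ["OK", "WARNING", "ERROR", "FATAL"]    # best to worst

        def __init__(self, name, children=None):
            self.name = name
            self.children = children or []
            self.state = "UNKNOWN"   # operational path
            self.status = "OK"       # fault path

        def command(self, action):
            """Propagate a command down the tree; leaves change their own state."""
            if not self.children:
                self.state = "READY" if action == "START" else "NOT_READY"
            for child in self.children:
                child.command(action)
            self.update()

        def update(self):
            """Summarise the children: a parent is only as good as its worst child."""
            if self.children:
                self.state = max((c.state for c in self.children),
                                 key=self.STATE_ORDER.index)
                self.status = max((c.status for c in self.children),
                                  key=self.STATUS_ORDER.index)

    # Example: a two-level slice of a control tree.
    hv = ControlNode("HighVoltage")
    cooling = ControlNode("Cooling")
    subdetector = ControlNode("SubDetector", [hv, cooling])
    subdetector.command("START")
    print(subdetector.name, subdetector.state, subdetector.status)  # SubDetector READY OK
    cooling.status = "ERROR"       # a fault reported on the parallel fault path
    subdetector.update()
    print(subdetector.name, subdetector.state, subdetector.status)  # SubDetector READY ERROR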
2 |
A measurement of the low mass Drell-Yan differential cross section in the di-muon channel with √s = 7 TeV proton-proton collisions at the ATLAS experiment. Goddard, Jack Robert. January 2014 (has links)
A measurement of the Drell-Yan differential cross section at low invariant mass is presented in the di-muon channel. A 1.64 pb−1 dataset of √s = 7 TeV proton-proton collision data collected by the ATLAS experiment at the LHC is used. The measurement is made in the invariant mass range 26 < M < 66 GeV, where M is the invariant mass of the muon pair. The relevant theoretical physics and the ATLAS detector are reviewed. The analysis is described with particular attention paid to the determination of the isolation efficiency corrections for the Monte Carlo and the estimate of the multijet background. The fiducial differential cross section is calculated with a statistical uncertainty that varies between 0.8% and 1.2%; the systematic uncertainty varies between 2.4% and 4.1%. A cross section extrapolated to the full phase space is also presented. This is dominated by theoretical uncertainties from the variation of the factorisation and renormalisation scales. The obtained fiducial differential mass cross section is compared to theoretical predictions at NLO and NNLO in perturbative QCD. It is shown that a move beyond NLO is needed to describe the distribution well, due to the restrictions of using a fixed-order theoretical prediction. A combination with the electron-channel measurement is briefly discussed, as well as comparisons to a di-muon measurement in an extended invariant mass range; these allow similar but stronger conclusions to be drawn. A PDF fit that uses the measurement presented here is also discussed. The fit demonstrates the impact of the measurement on the PDFs and further supports the conclusion that a move to NNLO in pQCD is needed to describe the data.
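As orientation for how a fiducial differential cross section of this kind is typically extracted, the sketch below shows the usual bin-by-bin relation between the observed count, the estimated background, a correction factor for detector efficiency and resolution, the integrated luminosity and the bin width. It is a generic illustration, not the procedure used in this thesis, and all numbers apart from the quoted 1.64 pb−1 luminosity are invented.

    # Generic bin-by-bin cross-section extraction (illustrative, made-up numbers).
    def dsigma_dm(n_obs, n_bkg, correction, lumi_pb, bin_width_gev):
        """Differential cross section in pb/GeV for one invariant-mass bin."""
        return (n_obs - n_bkg) / (correction * lumi_pb * bin_width_gev)

    # Hypothetical bin: 1000 observed events, 150 estimated background events,
    # correction factor C = 0.45, L = 1.64 pb^-1, 5 GeV wide mass bin.
    print(dsigma_dm(1000, 150, 0.45, 1.64, 5.0))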
3 |
The Discovery Potential of Neutral Supersymmetric Higgs Bosons with Decay to Tau Pairs at the ATLAS Experiment. Schaarschmidt, Jana. 07 April 2011 (has links) (PDF)
This work presents a study of the discovery potential for the neutral supersymmetric Higgs bosons h/A/H decaying to tau pairs with the ATLAS experiment at the LHC. The study is based on Monte Carlo samples which are scaled to state-of-the-art cross sections. The analyses are designed assuming an integrated luminosity of 30 fb−1 and a center-of-mass energy of √s = 14 TeV. The results are interpreted in the m_h^max benchmark scenario.
Two final states are analyzed: the dileptonic channel, where both tau leptons decay to electrons or muons, and the lepton-hadron channel, where one tau decays to an electron or muon and the other tau decays to hadrons. The study of the dilepton channel is based entirely on the detailed ATLAS simulation, while the analysis of the lepton-hadron channel is based on the fast simulation.
The collinear approximation is used to reconstruct the Higgs boson mass and its performance is studied. Cuts are optimized in order to discriminate the signal from the background and to maximize the discovery potential for a given Higgs boson mass hypothesis. In the lepton-hadron channel the selection is split into two analyses depending on the number of identified b-jets. Procedures to estimate the dominant backgrounds from data are studied. The shape and normalization of the Z→ττ background are estimated from Z→ℓℓ control regions. The ttbar contributions to the signal regions are estimated from ttbar control regions.
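For readers unfamiliar with the collinear approximation used above, the sketch below shows its usual form: the neutrinos are assumed to be emitted along the visible tau decay products, so the two components of the missing transverse momentum fix the visible momentum fractions x1 and x2, and the di-tau mass follows as m_vis/sqrt(x1*x2). This is a generic textbook formulation with invented example numbers, not code from the thesis.

    import math

    def collinear_mass(pt1, phi1, pt2, phi2, m_vis, met_x, met_y):
        """Di-tau mass in the collinear approximation (generic illustration)."""
        det = math.sin(phi2 - phi1)          # singular for back-to-back taus
        if abs(det) < 1e-6:
            return None                      # approximation breaks down
        # Transverse neutrino momenta along the two visible tau directions.
        nu1 = (met_x * math.sin(phi2) - met_y * math.cos(phi2)) / det
        nu2 = (met_y * math.cos(phi1) - met_x * math.sin(phi1)) / det
        x1 = pt1 / (pt1 + nu1)
        x2 = pt2 / (pt2 + nu2)
        if not (0.0 < x1 <= 1.0 and 0.0 < x2 <= 1.0):
            return None                      # unphysical solution, reject event
        return m_vis / math.sqrt(x1 * x2)

    # Toy event (all numbers invented): two visible tau candidates plus MET.
    print(collinear_mass(45.0, 0.3, 38.0, 2.1, 60.0, 20.0, 25.0))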
The individual analyses are combined and sensitivity predictions are made as a function of the Higgs boson mass mA and the coupling parameter tanβ. The light neutral MSSM Higgs bosons with mA = 150 GeV can be discovered if tanβ ≥ 11 is realized in nature. The heavy neutral MSSM Higgs bosons with mA = 800 GeV can be discovered for tanβ ≥ 44. However, due to the large width of the reconstructed Higgs boson mass peak and the mass degeneracy, only the sum of at least two of the three Higgs boson signals will be visible.
4 |
Vector Boson Scattering and Electroweak Production of Two Like-Charge W Bosons and Two Jets at the Current and Future ATLAS Detector. Schnoor, Ulrike. 22 May 2015 (has links) (PDF)
The scattering of electroweak gauge bosons is closely connected to the electroweak gauge symmetry and its spontaneous breaking through the Brout-Englert-Higgs mechanism. Since it contains triple and quartic gauge boson vertices, the measurement of this scattering process makes it possible to probe the self-interactions of the weak bosons. The contribution of the Higgs boson to the weak boson scattering amplitude ensures unitarity of the scattering matrix. Therefore, the scattering of massive electroweak gauge bosons is sensitive to deviations from the Standard Model prescription of the electroweak interaction and of the properties of the Higgs boson.
At the Large Hadron Collider (LHC), the scattering of massive electroweak gauge bosons is accessible through the measurement of the purely electroweak production of two jets and two gauge bosons. No such process has been observed before. The final state with two like-charge W bosons and two jets has the smallest background from QCD-mediated production of the same final state and is therefore the most promising channel for the first measurement of a process containing massive electroweak gauge boson scattering. This thesis presents the first measurement of the electroweak production of two jets and two identically charged W bosons, which yields the first observation of a process with contributions from quartic gauge interactions of massive electroweak gauge bosons.
An overview of the most important issues in the Monte Carlo simulation of vector boson scattering processes with current Monte Carlo generators is given in this work. The measurement of the final state of two jets and two leptonically decaying same-charge W bosons is conducted on proton-proton collision data with a center-of-mass energy of √s = 8 TeV, taken in 2012 with the ATLAS experiment at the LHC. The cross section of the electroweak production of two jets and two like-charge W bosons is measured with a significance of 3.6 standard deviations to be σ(W±W±jj−EW)[fiducial] = 1.3 ± 0.4 (stat.) ± 0.2 (syst.) fb in a fiducial phase-space region selected to enhance the contribution from WW scattering. The measurement is compatible with the Standard Model prediction of σ(W±W±jj−EW)[fiducial] = 0.95 ± 0.06 fb. Based on this measurement, limits on anomalous quartic gauge couplings are derived. The effect of anomalous quartic gauge couplings is simulated within the framework of an effective chiral Lagrangian unitarized with the K-matrix method. The limits on the anomalous coupling parameters α4 and α5 are found to be −0.14 < α4 < 0.16 and −0.23 < α5 < 0.24 at 95% confidence level.
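For orientation, the significance quoted above can be related to expected signal and background counts through the standard asymptotic formula for a counting experiment, sketched below. The result in the thesis comes from a full statistical treatment, so this simplified estimate and its invented inputs are purely illustrative.

    import math

    def asimov_significance(s, b):
        """Expected significance of a counting experiment with signal s and background b."""
        return math.sqrt(2.0 * ((s + b) * math.log(1.0 + s / b) - s))

    print(asimov_significance(12.0, 6.0))   # ~3.9 for these made-up yields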
Furthermore, the prospects for the measurement of the electroweak production of two same-charge W bosons and two jets within the Standard Model and with additional doubly charged resonances after the upgrade of the ATLAS detector and the LHC are investigated. For a high-luminosity LHC with a center-of-mass energy of √s = 14 TeV, the significance of the measurement with an integrated luminosity of 3000 fb^−1 is estimated to be 18.7 standard deviations. It can be improved by 30% by extending the inner tracking detector of the ATLAS experiment up to an absolute pseudorapidity of |η| = 4.0.
5 |
The Discovery Potential of Neutral Supersymmetric Higgs Bosons with Decay to Tau Pairs at the ATLAS Experiment. Schaarschmidt, Jana. 15 November 2010 (has links)
This work presents a study of the discovery potential for the neutral supersymmetric Higgs bosons h/A/H decaying to tau pairs with the ATLAS experiment at the LHC. The study is based on Monte Carlo samples which are scaled to state-of-the-art cross sections. The analyses are designed assuming an integrated luminosity of 30 fb−1 and a center-of-mass energy of √s = 14 TeV. The results are interpreted in the m_h^max benchmark scenario.
Two final states are analyzed: the dileptonic channel, where both tau leptons decay to electrons or muons, and the lepton-hadron channel, where one tau decays to an electron or muon and the other tau decays to hadrons. The study of the dilepton channel is based entirely on the detailed ATLAS simulation, while the analysis of the lepton-hadron channel is based on the fast simulation.
The collinear approximation is used to reconstruct the Higgs boson mass and its performance is studied. Cuts are optimized in order to discriminate the signal from the background and to maximize the discovery potential for a given Higgs boson mass hypothesis. In the lepton-hadron channel the selection is split into two analyses depending on the number of identified b-jets. Procedures to estimate the dominant backgrounds from data are studied. The shape and normalization of the Z→ττ background are estimated from Z→ℓℓ control regions. The ttbar contributions to the signal regions are estimated from ttbar control regions.
The individual analyses are combined and sensitivity predictions are made as a function of the Higgs boson mass mA and the coupling parameter tanβ. The light neutral MSSM Higgs bosons with mA = 150 GeV can be discovered if tanβ ≥ 11 is realized in nature. The heavy neutral MSSM Higgs bosons with mA = 800 GeV can be discovered for tanβ ≥ 44. However, due to the large width of the reconstructed Higgs boson mass peak and the mass degeneracy, only the sum of at least two of the three Higgs boson signals will be visible.
6 |
Monitoring and Optimization of ATLAS Tier 2 Center GoeGrid. Magradze, Erekle. 11 January 2016 (has links)
No description available.
7 |
Studies of Higgs Boson signals leading to multi-photon final states with the ATLAS detector. Cooper-Smith, Neil. January 2011 (has links)
The efficient identification of photons is a crucial aspect of the search for the Higgs boson at ATLAS. With the high luminosity and collision energies provided by the Large Hadron Collider, rejection of backgrounds to photons is of key importance. It is often not feasible to fully simulate background processes that require large numbers of events, due to processing time and disk space constraints. The standard fast simulation program, ATLFAST-I, is able to simulate events ∼1000 times faster than the full simulation program but does not always provide enough detailed information to make accurate background estimates. To bridge the gap, a set of photon reconstruction efficiency parameterisations, for converted and unconverted photons, has been derived from full simulation events and subsequently applied to ATLFAST-I photons. Photon reconstruction efficiencies for isolated photons agree within statistical error between fully simulated events and ATLFAST-I events with the parameterisations applied. A newly proposed Two Higgs Doublet Model channel, gg → H → hh → γγγγ, in which the light Higgs boson (h) is fermiophobic, has been investigated. The channel is of particular interest as it exploits the large production cross-section of a heavy Higgs boson (H) via gluon fusion at the LHC in conjunction with the enhanced branching ratio of a light fermiophobic Higgs boson (h) to a pair of photons. This channel is characterised by a distinct signature of four high-pT photons in the final state. Samples of signal events have been generated across the (mh, mH) parameter space along with the dominant backgrounds. An event selection has been developed and the search performed at generator level. In addition, the search was also performed with simulated ATLFAST-I events utilising the above photon reconstruction efficiency parameterisations. For both analyses, the expected upper limit on the cross-section at 95% confidence level is determined and exclusion regions of the (mh, mH) parameter space are defined for integrated luminosities of 1 fb−1 and 10 fb−1 in seven fermiophobic model benchmarks.
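A minimal sketch of how reconstruction-efficiency parameterisations of this kind can be applied to fast-simulation photons is given below: each photon is kept with a probability looked up from bins in pT and |eta|. The binning and efficiency values are invented placeholders and do not correspond to the parameterisations derived in this thesis.

    import random

    # Hypothetical reconstruction efficiencies per (pT bin, |eta| bin).
    EFF = {(0, 0): 0.80, (0, 1): 0.75,
           (1, 0): 0.92, (1, 1): 0.88}

    def bins(pt_gev, abs_eta):
        pt_bin = 0 if pt_gev < 50.0 else 1
        eta_bin = 0 if abs_eta < 1.37 else 1
        return pt_bin, eta_bin

    def reconstructed(photons):
        """Keep each fast-simulation photon according to the parameterised efficiency."""
        return [p for p in photons if random.random() < EFF[bins(p["pt"], abs(p["eta"]))]]

    fastsim_photons = [{"pt": 60.0, "eta": 0.4}, {"pt": 35.0, "eta": 1.9}]
    print(reconstructed(fastsim_photons))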
8 |
Search for the Higgs Boson in the process H→ZZ→llνν produced via vector-boson fusion with the ATLAS detector. Edwards, Clive. January 2012 (has links)
The search potential for a Standard Model Higgs boson produced via the vector boson fusion mechanism and decaying to two leptons and two neutrinos via two Z bosons is investigated with the ATLAS detector. The ATLAS detector is a general-purpose detector in operation at CERN, measuring proton-proton collisions produced by the Large Hadron Collider. This channel has been shown to have high sensitivity at large Higgs mass, where large amounts of missing energy in the signal provide good discrimination against the expected backgrounds. This work takes a first look at whether the sensitivity of this channel may be improved by using the remnants of the vector boson fusion process to provide extra discrimination, particularly at lower mass, where the sensitivity of the main analysis is reduced because of lower missing energy. Simulated data samples at a centre-of-mass energy of 7 TeV are used to derive signal significances over the mass range 200-600 GeV/c². Because the signal properties vary with mass, a low-mass and a high-mass event selection were developed and optimized. A comparison between simulated and real data (collected in 2010) is made for the variables used in the analysis, and the effect of pileup levels corresponding to those in the 2010 data is investigated. Possible methods to estimate some of the main backgrounds to this search are described and discussed. The impact of important theoretical and detector-related systematics is taken into account. Final results are presented in the form of 95% confidence level exclusion limits on the signal cross section relative to the SM prediction as a function of the Higgs boson mass, based on an integrated luminosity of 33.4 pb−1 of data collected during 2010.
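As an illustration of how the vector boson fusion remnants can provide extra discrimination, the sketch below applies a conventional tag-jet requirement: the two leading jets must be well separated in pseudorapidity and have a large dijet invariant mass. The thresholds and the toy jets are placeholder values and not the selection studied in this thesis.

    import math

    def passes_vbf_tag(jet1, jet2, min_delta_eta=3.5, min_mjj_gev=500.0):
        """jet = (pt_GeV, eta, phi); True if the pair looks VBF-like (illustrative cuts)."""
        pt1, eta1, phi1 = jet1
        pt2, eta2, phi2 = jet2
        delta_eta = abs(eta1 - eta2)
        # Dijet invariant mass in the massless-jet approximation.
        mjj2 = 2.0 * pt1 * pt2 * (math.cosh(eta1 - eta2) - math.cos(phi1 - phi2))
        return delta_eta > min_delta_eta and math.sqrt(max(mjj2, 0.0)) > min_mjj_gev

    # Toy tag-jet pair: one forward, one backward, both fairly hard.
    print(passes_vbf_tag((80.0, 2.5, 0.1), (60.0, -2.2, 2.9)))   # True for these values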
9 |
The performance of the ATLAS missing transverse momentum high-level trigger in 2015 pp collisions at 13 TeV. Chiu, Justin. 09 September 2016 (has links)
The performance of the ATLAS missing transverse momentum (ETmiss) high-level trigger during 2015 operation is presented. In 2015, the Large Hadron Collider operated at a higher centre-of-mass energy and a shorter bunch spacing (√s = 13 TeV and 25 ns, respectively) than in previous operation. In future operation, the Large Hadron Collider will run at an even higher instantaneous luminosity (O(10^34 cm^−2 s^−1)) and produce a higher average number of interactions per bunch crossing, <mu>. These operating conditions will pose significant challenges to the ETmiss trigger efficiency and rate. An overview is given of the existing algorithms and of the new algorithms implemented to address these challenges. An integrated luminosity of 1.4 fb^−1 with <mu> = 14 was collected from pp collisions of the Large Hadron Collider by the ATLAS detector during October and November 2015 and was used to study the efficiency of the trigger algorithms, their correlation with the offline reconstruction, and their rates. The performance was found to be satisfactory. From these studies, recommendations for future operating specifications of the trigger were made.
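As an illustration of how a trigger efficiency of this kind is typically measured, the sketch below computes a turn-on curve: in events selected by an independent reference, the per-bin efficiency is the fraction of events in each offline ETmiss bin that also fired the ETmiss trigger. The event list and binning are invented and do not reproduce the results presented here.

    def turn_on(events, bin_edges):
        """events: list of (offline_met_GeV, passed_trigger); returns per-bin efficiency."""
        eff = []
        for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
            in_bin = [passed for met, passed in events if lo <= met < hi]
            eff.append(sum(in_bin) / len(in_bin) if in_bin else None)
        return eff

    toy_events = [(40, False), (60, False), (80, True), (90, True), (120, True), (130, True)]
    print(turn_on(toy_events, [0, 50, 100, 150]))   # [0.0, 0.666..., 1.0] for this toy input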
10 |
Search for Weakly Produced Supersymmetric Particles in the ATLAS Experiment. Tylmad, Maja. January 2014 (has links)
The Large Hadron Collider located at CERN is currently the most powerful particle accelerator, and ATLAS is an experiment designed to exploit the high-energy proton-proton collisions provided by the LHC. It opens a unique window to search for new physics at very high energy, such as supersymmetry, a postulated symmetry between fermions and bosons. Supersymmetry can provide a solution to the hierarchy problem and a candidate for Dark Matter. It also predicts the existence of new particles with masses around 1 TeV, thus reachable with the LHC. This thesis presents a new search for supersymmetry in a previously unexplored search channel, namely the production of charginos and neutralinos directly decaying to electroweak on-shell gauge bosons, with two leptons, jets, and missing transverse momentum in the final state. The search is performed with proton-proton collision data at a center-of-mass energy of √s = 8 TeV recorded with the ATLAS experiment in 2012. The design of a signal region sensitive to the new signal is presented and a data-driven technique to estimate the Z+jets background is developed. Precise measurements of hadronic jet energies are crucial for searches for new physics with ATLAS. A precise energy measurement of hadronic jets requires detailed knowledge of the pulse shapes of the hadron calorimeter signals. The performance of the ATLAS Tile Calorimeter in this respect is presented using both pion test-beams and proton-proton collision data. / At the time of the doctoral defense, the following papers were unpublished and had a status as follows: Paper 2 and Paper 4: Technical report from the ATLAS experiment.
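To illustrate the role of the pulse shapes mentioned above, the sketch below shows the general idea behind amplitude reconstruction from digitised calorimeter samples: a weighted sum of pedestal-subtracted samples, with weights derived from the expected pulse shape. The weights and samples are invented and are not the Tile Calorimeter constants studied in this thesis.

    def amplitude(samples, pedestal, weights):
        """Signal amplitude as a weighted sum of pedestal-subtracted samples."""
        return sum(w * (s - pedestal) for w, s in zip(weights, samples))

    toy_samples = [51, 53, 78, 120, 95, 64, 55]          # ADC counts, 7 samples
    toy_weights = [-0.1, 0.0, 0.3, 0.6, 0.3, 0.0, -0.1]  # illustrative weights
    print(amplitude(toy_samples, pedestal=50.0, weights=toy_weights))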