
tt̄ Analysis with Taus in the Final State

Osuna Escamilla, Carlos 03 April 2009 (has links)
The analysis presented in this thesis has been developed in the context of the ATLAS experiment, part of the Large Hadron Collider (LHC), installed in a 27 km circumference underground tunnel at CERN (Geneva). ATLAS is one of four detectors installed along the main ring of the accelerator, together with CMS, LHCb and ALICE. Eight years after the electron-positron accelerator LEP was turned off, the LHC will start taking data in November 2009, becoming the largest hadron accelerator in the world in both luminosity and collision energy (14 TeV in the centre of mass). Many of the questions left unanswered after LEP, such as the existence of a Higgs boson or of particles beyond the Standard Model, require collision energies of the order of the TeV and a very high luminosity in order to accumulate enough statistics over the lifetime of the experiment. The LHC was built to fulfil these requirements, and is expected to answer these and other fundamental questions of physics during its roughly 10 years of data taking. The analysis presented in this thesis has shown that the production of a top-antitop pair that decays semileptonically, with a tau lepton that decays hadronically, can be observed with a suitable event selection to reduce the physics background, even with the first data of the ATLAS experiment. The study, based on a Monte Carlo simulation, shows that three months of data taking at low luminosity would be enough to observe about 200 signal events. The only requirement is a good understanding of the different detector components and reconstruction algorithms, which is expected to be achieved within the first months of data taking.
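The expected yield of about 200 signal events can be turned into an expected discovery significance with the standard Asimov counting-experiment formula. The sketch below is a minimal illustration; the background yield of 400 events is purely an assumption for the example, not a number from the thesis.

```python
import math

def asimov_significance(s: float, b: float) -> float:
    """Median expected discovery significance for a counting
    experiment with s expected signal and b expected background
    events (the standard Asimov approximation)."""
    return math.sqrt(2.0 * ((s + b) * math.log(1.0 + s / b) - s))

# The ~200 signal events come from the abstract above; the assumed
# background yield of 400 events is purely illustrative.
z = asimov_significance(200.0, 400.0)
```

For small s/b the formula reduces to the familiar s/sqrt(b) estimate.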

Simultaneous measurement of the top quark pair production cross-section and Rb in the ATLAS experiment

Nadal Serrano, Jordi 20 July 2012 (has links)
The goal of this thesis is the simultaneous measurement of the top quark pair production cross-section and the ratio Rb in the ATLAS experiment at the LHC, the Large Hadron Collider, on the French-Swiss border near Geneva. ATLAS (A Toroidal LHC ApparatuS), the biggest detector at the LHC, is carrying out new investigations of matter, energy and time. ATLAS was built to understand why the Universe is the way it is today. The LHC is able to probe the smallest constituents of matter far beyond any previous experiment, so that new fundamental phenomena of nature can be explored. From a theoretical point of view, the construction of the Standard Model started in the 1960s, giving a common framework for the new particles discovered at the SLAC and CERN colliders, among others. After the discovery of the bottom quark, also a member of the third quark family, physicists started looking for the top quark. There was some indirect evidence from Z boson decays in agreement with theoretical predictions that included the then-unseen top quark. Finally, in 1995, the top quark was discovered at the Tevatron at Fermilab, and it remains the heaviest elementary particle of the Standard Model discovered so far. Top decays are complex: all parts of the detector are required to detect them, since they involve leptons (electrons or muons), light jets, b-jets and missing transverse energy (MET). We are interested in the semileptonic decay, in which the tt̄ pair decays to q q̄' b b̄ plus an electron or a muon. Some physics processes also produce this final state, mainly vector boson decays, but also single top and multi-jet processes made of light jets and MET. A good identification and subsequent rejection of these uninteresting events is crucial to measure our signal events with high precision. Once the selection is optimised to obtain the purest sample of top quarks, we can proceed to measure the cross-section and Rb. The measurement is done through a profile fit to the reconstructed mass of the hadronic top. With the profile technique, the systematic uncertainties are also treated as free parameters and can therefore, with a high-statistics sample, be constrained by the data themselves. Exploiting the strong correlation between the cross-section and the CKM matrix element Vtb, the latter is also included in the fit, opening the possibility of setting limits on new physics. Finally, with 1 fb−1 of LHC data we measure the top quark cross-section with a precision of 6%, already smaller than the present theoretical error.
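The idea of letting the data constrain a systematic uncertainty can be sketched with a toy binned fit: a nuisance parameter (here the background yield, with a Gaussian constraint) floats alongside the signal strength in a Poisson likelihood. All templates and yields below are illustrative assumptions, not the thesis inputs.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy templates for a binned "mass" distribution (made-up shapes).
sig_shape = np.array([0.1, 0.3, 0.4, 0.2])   # peaked signal template
bkg_shape = np.array([0.4, 0.3, 0.2, 0.1])   # falling background template
true_mu, true_b = 1.0, 500.0                 # signal strength, bkg yield
s_nominal = 200.0

data = rng.poisson(true_mu * s_nominal * sig_shape + true_b * bkg_shape)

def nll(params):
    """Poisson negative log-likelihood with a 10% Gaussian
    constraint on the background yield (the nuisance parameter)."""
    mu, b = params
    lam = np.clip(mu * s_nominal * sig_shape + b * bkg_shape, 1e-9, None)
    poisson_part = np.sum(lam - data * np.log(lam))
    constraint = 0.5 * ((b - true_b) / (0.1 * true_b)) ** 2
    return poisson_part + constraint

fit = minimize(nll, x0=[0.5, 400.0], method="Nelder-Mead")
mu_hat, b_hat = fit.x
```

With enough bins and statistics, the fitted nuisance ends up with a smaller uncertainty than its prior constraint, which is the mechanism by which the profile technique reduces the systematic error quoted above.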

Identification of excited states in conjugated polymers

Hartwell, Lewis John January 2002 (has links)
This thesis reports quasi steady state photoinduced absorption measurements from three conjugated polymers: polypyridine (PPy), polyfluorene (PFO) and the emeraldine base (EB) form of polyaniline. The aim of these experiments was to determine the nature of the photoexcited states existing in these materials in the millisecond time domain, as this has important consequences for the operation of real devices manufactured using these materials. The results from the photoinduced absorption experiments are closely compared with published results from pulse radiolysis experiments. In all cases there is very good correspondence between the two data sets, which has enabled the photoexcited states to be assigned with a high degree of confidence. Quasi steady-state photoinduced absorption involves the measurement of the change in absorption of a material in response to optical excitation with a laser beam. The changes in absorption are small, so a dedicated instrument was developed and optimised for each different sample. Lock-in techniques were used to recover the small signals from the samples. The samples involved were thin films of the polymer spin coated onto sapphire substrates in the cases of PPy and EB. Solution state experiments were conducted on EB. The experiments on PFO were conducted on aligned and unaligned thin films provided by Sony. In the case of the aligned PFO samples, the photoinduced absorption spectrometer was modified to enable polarisation-sensitive data collection. In PPy, both triplet excitons and polarons have been shown to be long-lived photoexcitations, with photoinduced absorption features at 2.29 eV (triplet exciton transition), 1.5 eV and 0.8 eV (polaron transitions). In PFO, the one observed photoinduced band at 1.52 eV is assigned to a triplet exciton. Two photoinduced absorption bands are observed in EB, at 1.4 eV and 0.8 eV. These are assigned to a self-trapped CT singlet exciton and a triplet exciton, respectively.
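The lock-in recovery of a small modulated absorption change can be illustrated with a minimal digital sketch: multiply the noisy trace by in-phase and quadrature references at the chopping frequency and average. All numbers below (frequencies, amplitude, noise level) are illustrative assumptions, not values from the experiments described above.

```python
import numpy as np

rng = np.random.default_rng(1)

fs, f_ref, duration = 10_000.0, 137.0, 2.0     # sample rate (Hz), chop freq (Hz), seconds
t = np.arange(0.0, duration, 1.0 / fs)
amplitude = 1e-3                               # "small" absorption-change signal
trace = (amplitude * np.sin(2 * np.pi * f_ref * t)
         + rng.normal(0.0, 0.02, t.size))      # broadband noise swamps the signal

# Integer number of reference periods, so the averages are clean:
# the noise averages away while the modulated signal survives.
x = 2.0 * np.mean(trace * np.sin(2 * np.pi * f_ref * t))  # in-phase component
y = 2.0 * np.mean(trace * np.cos(2 * np.pi * f_ref * t))  # quadrature component
recovered = np.hypot(x, y)                                # signal magnitude
```

Averaging over longer traces (or a narrower low-pass filter) further suppresses the noise, which is why the technique can pull out fractional absorption changes far below the noise floor of a single sample.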

Brane probes and gauge theory/gravity dualities

Page, David C. January 2002 (has links)
We examine the use of branes as probes of supergravity geometries which arise in the study of gauge theory/gravity dualities. We investigate the moduli spaces of supersymmetric gauge theories through moduli spaces of brane probes in the dual gravity theories. Preferred coordinate systems emerge in which the supergravity geometries can readily be compared to the gauge theory and various gauge theory quantities such as anomalous scaling dimensions can be read off. We also consider the physics of certain expanded brane configurations, called giant gravitons. We identify supergravity solutions which represent coherent states of these objects. We find a degeneracy between giant graviton probes and massless particles in a broad class of supergravity backgrounds and uncover a close relationship with charged particle states in lower dimensions.

Development, modelling and investigation of a charge breeder for radioactive beams

Emmanouilidis, A. January 2003 (has links)
No description available.

Fundamental and applied measurements in ICP-MS

Carter, Julian Robert January 2002 (has links)
Fundamental and applied aspects of ICP-MS have been investigated to gain an increased understanding of the technique and improve on its analytical capabilities. Dissociation temperatures of polyatomic ions were calculated using a double-focusing sector instrument, to obtain more reliable mass spectral data, with controlled vapour introduction via a Dreschel bottle to allow accurate calculation of the species in the plasma. The equilibrium temperature for the plasma operated at 1280 W, calculated using CO+ and C2+ as the thermometric probes, was ca. 5800-7400 K, while using ArO+ and ArC+ as the thermometric probes the calculated temperature was ca. 2000-7000 K. Calculated dissociation temperatures were used to elucidate the site of formation of these ions. Results confirmed that strongly bound ions such as CO+ and C2+ were formed in the plasma, whereas weakly bound ions such as ArO+ and ArC+ were formed in the interface region, owing to the gross deviation of the calculated temperatures from those expected for a system in thermal equilibrium. The use of helium gas in a hexapole collision cell attenuated the signals of ArH+, Ar+, ArO+, ArC+, ArCl+ and Ar2+, allowing improved determination of 39K+, 40Ca+, 56Fe+, 52Cr+, 75As+ and 80Se+ in standard solutions. The use of the hexapole collision cell also resulted in an enhancement of analyte signals due to the thermalisation of the ion beam. The kinetic energies of ions sampled from the plasma and of those sampled from the skimmer cone were determined using a modified lens stack, to assess the significance for memory effects of material deposited on the skimmer cone. The most probable kinetic energy of Be+ ions sampled from the skimmer cone was found to be 2.4 eV, considerably lower than the most probable kinetic energy of Be+ ions sampled from the plasma, which was found to be 9.5 eV. 
The low kinetic energy of the ions deposited on the skimmer cone means they will only contribute to the analytical signal under certain instrumental operating conditions. The feasibility of liquid sample introduction into an LP-ICP-MS system designed for gaseous sample introduction was investigated using a particle beam separator. The low signal was attributed to the low gas kinetic temperature of the plasma, which was confirmed by the fact that the signal increased rapidly with increasing temperature of the transfer line between the particle beam separator and the LP-ICP torch. This was also supported by the fact that more volatile compounds gave mass spectra whereas less volatile compounds did not. A limit of detection of 30 mg l−1 for chlorobenzene was achieved. Finally, silicon and phosphorus speciation was performed by HPLC coupled to sector-field ICP-MS. Silicones ranging in molecular weight from 162 g mol−1 to 16500 g mol−1 were extracted from spiked human plasma and separated by size-exclusion chromatography. Limits of detection ranged from 12 ng ml−1 Si for the 162 g mol−1 silicone to 30 ng ml−1 Si for the 16500 g mol−1 silicone. Organophosphate pesticides were extracted from spiked plasma and separated by reversed-phase chromatography. Recoveries were between 55 and 81%. Limits of detection were 0.9 ng ml−1 P, 1.8 ng ml−1 P, 1.6 ng ml−1 P and 3.0 ng ml−1 P for dichlorvos, methyl parathion, malathion and quinalphos, respectively. Phosphates were extracted from various food products and separated by ion-exchange chromatography. Limits of detection were 1.0 ng ml−1 P, 2.3 ng ml−1 P and 39 ng ml−1 P for PO4 3−, P2O7 4− and P3O10 5−, respectively.
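The limits of detection quoted above are conventionally computed as three times the standard deviation of blank replicates divided by the calibration slope (the common 3-sigma convention). The sketch below illustrates that arithmetic; the blank readings and slope are made-up example numbers, not data from this work.

```python
import statistics

# Replicate blank readings (detector counts) and calibration slope:
# both are illustrative assumptions for the 3-sigma LOD convention.
blank_counts = [102.0, 98.5, 101.2, 99.8, 100.5, 97.9, 100.1]
slope_counts_per_ng_ml = 5.0e3          # detector counts per (ng/ml)

# LOD = 3 * s_blank / slope, in ng/ml.
lod = 3 * statistics.stdev(blank_counts) / slope_counts_per_ng_ml
```

A noisier blank or a shallower calibration slope both degrade the LOD, which is why collision-cell interference removal (raising the usable slope at the analyte mass) improves the determinations listed above.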

Dissociation mechanisms of photoexcited molecular ions

Cooper Inglis, Louise January 2001 (has links)
No description available.

Design, development, and modeling of a Compton camera tomographer based on room temperature solid state pixel detectors

Calderón, Yonatán 25 April 2014 (has links)
Since the discovery of X-rays in 1895 and their first medical application one year later, many different medical imaging techniques have been developed. Emission tomography is a branch of medical imaging that allows doctors to track physiological processes in the patient. A radioactive compound called a radiotracer is injected into the body of the patient. The radiotracer molecule is chosen to fulfil a specific task in the organism, allowing a concrete physiological process to be tracked. The two main emission tomography techniques are PET and SPECT. In PET (Positron Emission Tomography) the injected radiotracer is a positron emitter. The emitted positron annihilates with an electron, producing a pair of back-to-back gamma photons. The PET scanner (usually cylindrical in shape) detects these photon pairs and reconstructs an image of the radiotracer concentration. In SPECT (Single Photon Emission Computed Tomography) a single gamma photon is emitted in each radioactive decay of the radiotracer compound. The SPECT system consists of (at least) one gamma camera, composed of a mechanical collimator and a position-sensitive photodetector. The mechanical collimator consists of a dense material with holes that only allow the passage of photons coming from a particular direction. The collimated photons are detected by the photodetector, giving a projection of the radiotracer distribution in the volume of the patient's body. A three-dimensional image of the radiotracer concentration is obtained from projections taken in several directions. SPECT is the most widely used emission tomography technique because of the large variety of available radiotracers and its relatively low cost compared with PET. However, SPECT has intrinsic limitations due to the mechanical collimation: low efficiency, since only a fraction of the gamma photons can pass through the collimator; an inverse relationship between efficiency and image resolution (the bigger the collimator holes, the higher the efficiency but the lower the image resolution); and the need to rotate the camera, increasing the exposure time. The concept of the Compton camera was proposed in order to overcome these limitations. A Compton camera consists of two detectors, called the scatterer and the absorber, working in coincidence. In a coincidence event the gamma photon (emitted by the radiotracer) reaches the scatterer and undergoes a Compton interaction, scattering through a certain angle. The scattered gamma reaches the absorber, where it undergoes a photoelectric interaction and is absorbed. Using the positions of both interactions and the corresponding deposited energies, one can reconstruct a cone surface which contains the emission point of the gamma photon. With the cones reconstructed from several coincidences, an image of the activity in the patient's body can be obtained. The Compton camera has the potential to overcome all the intrinsic limitations of SPECT: each gamma has a probability of being scattered and producing a coincidence event, the image resolution is not tied to the efficiency, and it is possible to obtain three-dimensional images without moving the camera. However, the complexity of the image reconstruction and the limits of detector technology have so far prevented the Compton camera concept from becoming a viable medical imaging system. The VIP (Voxel Imaging PET) project proposes a novel detector design based on pixelated solid-state (CdTe) technology to overcome the limitations of the scintillator detectors used in PET. VIP features a modular design in which the basic element is the detector module unit. The module contains the solid-state detectors, which are segmented into millimetre-size voxels. Thanks to a dedicated read-out chip developed within the project, each one of the voxels is an independent channel for the measurement of the energy, position and time of arrival of the detected gamma photons. The detector modules are stacked in order to form PET sectors, and putting several of these sectors together leads to a seamless PET ring. Although the VIP module has been designed for PET, the flexibility of the module design allows other possible applications to be explored, such as PEM (Positron Emission Mammography) and the Compton camera. In this thesis we evaluate a Compton camera based on the VIP detector concept. The scatterer and absorber detectors are made from stacks of specially designed module units. Silicon is used as the detector material in the scatterer in order to maximise the Compton interaction probability for the incoming gamma photons, while CdTe is used in the absorber in order to stop the gamma photons emerging from the scatterer. The excellent energy resolution of the solid-state detectors, combined with the millimetre size of the detector voxels, results in an accuracy in the reconstruction of the Compton cones that cannot be achieved with scintillator crystals. We use Monte Carlo simulations to evaluate and model the proposed Compton camera, with two different image reconstruction algorithms. The simulations allow us to obtain the optimal geometrical parameters as well as the expected performance of the Compton camera in terms of detection efficiency and image resolution. A smaller FOV (Field-Of-View) prototype is also evaluated.
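The cone opening angle mentioned above follows from standard Compton kinematics: given the energy deposited in the scatterer and the energy absorbed afterwards, the scattering angle is fixed. This is a minimal sketch of that textbook relation (assuming the photon is fully absorbed, so the incident energy is the sum of the two deposits), not the reconstruction code of the thesis.

```python
import math

ME_C2_KEV = 511.0  # electron rest energy in keV

def compton_cone_angle(e_scatter_kev: float, e_absorb_kev: float) -> float:
    """Opening half-angle (radians) of the Compton cone, from the
    energy deposited in the scatterer and in the absorber.  The
    incident photon energy is their sum (full absorption assumed)."""
    e0 = e_scatter_kev + e_absorb_kev   # incident photon energy
    e1 = e_absorb_kev                   # scattered photon energy
    cos_theta = 1.0 - ME_C2_KEV * (1.0 / e1 - 1.0 / e0)
    return math.acos(cos_theta)

# A 511 keV photon depositing half its energy scatters through 90 degrees.
theta = compton_cone_angle(255.5, 255.5)
```

The derivative of this relation with respect to the measured energies is what makes the energy resolution of the solid-state detectors directly limit the angular (and hence image) resolution.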

Measurement of the Z/γ*+b-jet production cross section in pp collisions at √s = 1.96 TeV with the CDF detector

Ortolan, Lorenzo 16 July 2012 (has links)
Processes at hadron colliders, such as the production of jets, are described by Quantum Chromodynamics (QCD). Precise descriptions of processes involving jets in association with a vector boson are nowadays of great relevance, as they represent an irreducible background to other Standard Model (SM) processes and to searches for new physics. The experimental study and understanding of b-jet production in association with a Z boson are crucial for several reasons. On the one hand, it is the most important background for a light Higgs boson decaying into a bottom-antibottom quark pair and produced in the ZH mode. This is one of the most promising channels for the Higgs search at the Tevatron, in particular since the latest results excluded the high-mass region (MH > 127 GeV/c2). On the other hand, the signature of b-jets and a Z boson is also a background to new physics searches, such as supersymmetry, where a large coupling of the Higgs boson to bottom quarks is allowed. The production cross-section of b-jets in events with a Z boson has already been measured at hadron colliders, at the Tevatron by the CDF and D0 experiments, and such measurements are now pursued at the LHC by ATLAS and CMS. In particular, the previous CDF measurement was performed with only 2 fb−1 and was limited by the statistical uncertainty. This PhD thesis presents a new measurement of the Z/γ*+b-jet production cross-section using the complete dataset collected by CDF during Run II. Z/γ* bosons are selected in the electron and muon decay modes and are required to have 66 < MZ < 116 GeV/c2, while jets, reconstructed with the MidPoint algorithm with a cone radius of 0.7, have to be central (|Y| < 1.5) with pT > 20 GeV/c. The per-jet cross-section is measured with respect to the inclusive Z/γ* and the Z/γ*+jets cross-sections. Results are compared to leading-order (LO) event generator plus parton shower predictions and to next-to-leading-order (NLO) predictions corrected for non-perturbative effects such as hadronization and the underlying event. Differential distributions as a function of jet transverse momentum and jet rapidity are also presented, together with the comparison to NLO pQCD predictions for different renormalization and factorization scales and various PDF sets.
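Measuring a cross-section as a ratio means many normalisation uncertainties (luminosity, lepton efficiencies) cancel. Schematically, the per-jet ratio is the efficiency- and purity-corrected tagged b-jet yield over the inclusive Z yield. Every number below is an illustrative assumption, not a CDF result.

```python
# Schematic per-jet cross-section ratio: tagged b-jet yield corrected
# for b-tagging efficiency and sample purity, divided by the inclusive
# Z yield.  All four numbers are made up for illustration.
n_tagged_jets = 300          # jets passing the b-tag in Z events
b_purity = 0.55              # fraction of tagged jets that are true b-jets
b_tag_eff = 0.40             # b-tag efficiency for true b-jets
n_z_inclusive = 200_000      # inclusive Z/gamma* candidate yield

ratio = (n_tagged_jets * b_purity / b_tag_eff) / n_z_inclusive
```

In a real analysis the purity would itself come from a fit (e.g. to a secondary-vertex mass template), so its uncertainty propagates directly into the ratio.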

Higgs bosons and QCD jets at two loops

Koukoutsakis, Athanasios January 2003 (has links)
In this thesis we present techniques for the calculation of two-loop integrals contributing to the virtual corrections to physical processes with three on-shell and one off-shell external particles. First, we describe a set of basic tools that simplify the manipulation of complicated two-loop integrals. A technique for deriving helicity amplitudes with the use of a set of projectors is demonstrated. Then we present an algorithm, introduced by Laporta, that helps reduce all possible two-loop integrals to a basic set of 'master integrals'. Subsequently, these master integrals are analytically evaluated by deriving and solving differential equations in the external scales of the process. Two-loop matrix elements and helicity amplitudes are calculated for the physical processes γ* → q q̄ g and H → g g g respectively. Conventional Dimensional Regularization is used in the evaluation of the Feynman diagrams. For both processes, the infrared singular behaviour is shown to agree with that predicted by Catani.
