381 |
Hur bildas svarta hål? : Neutronstjärnor, kaonkondensation och dess konsekvenser och Minihål på jorden? Höglund Aldrin, Ronja January 2008 (has links)
Building on the theoretical background, the definition of black holes and their general characteristics, I have studied some conditions for the formation of black holes from dying single stars. The supernova process is described along with the influence on neutron stars of destabilising mechanisms such as kaon condensation. Various observations as well as alternative theories are presented as arguments and counterarguments. From this material I draw the conclusion that black holes can exist in more varieties than has previously been assumed, foremost in the shape of low-mass black holes with masses between 1.5 and 1.8 M_sun. Furthermore, the possibility of producing microscopic black holes in the LHC accelerator (Large Hadron Collider) at CERN is portrayed, together with the controversies that currently surround this phenomenon and the knowledge that could be gained from controlled observations of such objects. The general conclusion here is the unavoidable meeting between particle physics and astrophysics in order to access the deepest insights about the Universe we inhabit.
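As a sense of scale for the low-mass black holes mentioned above, the Schwarzschild radius r_s = 2GM/c² can be evaluated directly; the short sketch below is an illustration added for this listing, not material from the thesis.

```python
# Illustrative only: Schwarzschild radius r_s = 2GM/c^2 for the low-mass
# black holes (1.5-1.8 solar masses) discussed in the abstract above.
G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
c = 2.998e8          # speed of light [m/s]
M_SUN = 1.989e30     # solar mass [kg]

def schwarzschild_radius(mass_solar: float) -> float:
    """Return the Schwarzschild radius in metres for a mass given in solar masses."""
    return 2.0 * G * mass_solar * M_SUN / c**2

for m in (1.5, 1.8):
    print(f"{m} M_sun -> r_s = {schwarzschild_radius(m) / 1e3:.2f} km")
```

For these masses the horizon radius comes out at roughly 4 to 5 km, comparable to the size of a neutron star core.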
|
382 |
Study of WW decay of a Higgs boson with the ALEPH and CMS detectors. Delaere, Christophe 06 July 2005 (has links)
The Standard Model is a mathematical description of the very nature of elementary particles and their interactions, now seen as relativistic quantum fields. A key feature of the theory is the Brout-Englert-Higgs mechanism, responsible for the spontaneous breaking of the underlying gauge symmetry, and which implies the existence of a neutral Higgs particle. Searches for the Higgs boson were conducted at the Large Electron Positron collider until 2000 and are still ongoing at the Tevatron collider, but the particle has not been observed. In order to better constrain models with an exotic electroweak symmetry breaking sector, a search for a Higgs boson decaying into a W pair is carried out with the ALEPH detector on 453 pb-1 of data collected at center-of-mass energies up to 209 GeV. The analysis is optimized for the many topologies resulting from the six-fermion final state. A lower limit of 105.8 GeV/c² on the Higgs boson mass in a fermiophobic Higgs boson scenario is obtained. The ultimate machine for the Higgs boson discovery is the Large Hadron Collider, which is being built at CERN. In order to evaluate the physics potential of the CMS detector, the WH associated production of a Higgs boson decaying into a W pair is studied. The performance of the data acquisition and its sophisticated trigger system, of particle identification and of event reconstruction is investigated through a detailed analysis of simulated data. Three-lepton final states are shown to provide interesting possibilities. For an integrated luminosity of 100 fb-1, a potential signal significance of more than 5σ is obtained in the mass interval between 155 and 178 GeV/c². The corresponding precision on the Higgs boson mass and on the partial decay width into W pairs is evaluated. This channel also provides one of the very few possible avenues towards the discovery of a fermiophobic Higgs boson below 180 GeV/c². These studies required many original technical developments, which are also presented.
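As a rough illustration of what a quoted signal significance means, the sketch below evaluates the simple counting estimate s/√b on placeholder event yields; it is not the statistical treatment used in the thesis, and the numbers are assumptions chosen only for the example.

```python
# Schematic counting-experiment significance, NOT the thesis's actual
# statistical treatment. The event counts below are placeholders.
import math

def counting_significance(n_signal: float, n_background: float) -> float:
    """Simple s/sqrt(b) estimate of the signal significance."""
    return n_signal / math.sqrt(n_background)

# Hypothetical signal and background yields after a three-lepton selection:
s, b = 30.0, 25.0
print(f"significance ~ {counting_significance(s, b):.1f} sigma")
```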
|
383 |
Measurements of photon induced processes in CMS and forward proton detection at the LHC. Rouby, Xavier 26 September 2008 (has links)
High energy photon induced processes at the CERN Large Hadron Collider (LHC) constitute a unique testing ground for physics within and beyond the Standard Model of Elementary Particles. Colliding protons can interact by the exchange of one or two high energy photons, leading to very clean final state topologies. Several issues related to the study of photon interactions at the LHC are addressed in this Thesis. The detection of forward scattered protons, after the photon exchange, requires near-beam detectors. Edgeless sensor prototypes have been developed as possible solutions for such an application. A proper design of these detectors has required the development of a dedicated simulator (Hector) for the transport of charged particles in beamlines. Finally, analyses of the detection in the CMS experiment of the photon-induced exclusive production of lepton pairs are presented, in view of applications early from the LHC start-up, in particular for the measurement of the absolute luminosity, a fundamental parameter of the LHC.
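The beamline transport performed by a simulator such as Hector can be pictured with linear transfer matrices acting on a proton's transverse coordinates; the sketch below is a generic textbook example of that formalism, with a drift, a thin quadrupole and illustrative lengths and strengths, not Hector's actual implementation.

```python
# Minimal sketch of linear transfer-matrix transport in one transverse plane,
# of the kind a beamline simulator such as Hector performs. Generic textbook
# optics only; element lengths, strengths and the scattering angle are illustrative.
import numpy as np

def drift(length_m: float) -> np.ndarray:
    """Transfer matrix of a field-free drift of given length."""
    return np.array([[1.0, length_m],
                     [0.0, 1.0]])

def thin_quadrupole(focal_length_m: float) -> np.ndarray:
    """Thin-lens quadrupole (focusing for positive focal length)."""
    return np.array([[1.0, 0.0],
                     [-1.0 / focal_length_m, 1.0]])

# Particle state (x [m], x' [rad]) at the interaction point.
state = np.array([0.0, 100e-6])   # 100 microrad scattering angle, illustrative

# Illustrative lattice: drift, focusing quadrupole, drift to a near-beam detector.
lattice = [drift(50.0), thin_quadrupole(25.0), drift(150.0)]
for element in lattice:
    state = element @ state

print(f"x at detector = {state[0]*1e3:.2f} mm, x' = {state[1]*1e6:.1f} microrad")
```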
|
384 |
Study of Drell-Yan production in the di-electron channel and search for new physics at the LHC. Charaf, Otman 22 October 2010 (has links)
This thesis is devoted to the search for new physics and to the study of Drell-Yan production in the di-electron channel with the CMS detector at the LHC. Several theories beyond the Standard Model (extra dimensions, grand unified theories) predict the existence of massive particles that can decay into an electron pair. The selection of the searched-for events is presented and studied. The analysis strategy is introduced and tested. Finally, the analysis of the first data at 7 TeV is described and the results are discussed.
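Searches of this kind scan the high-mass tail of the di-electron invariant mass spectrum; the sketch below shows the standard invariant-mass computation from two electron candidates described by (pT, eta, phi), with arbitrary example kinematics rather than CMS reconstruction code.

```python
# Illustrative computation of a di-electron invariant mass from (pT, eta, phi),
# the quantity whose high-mass tail such searches scan. Not CMS code; the
# kinematic values below are arbitrary examples.
import math

def four_vector(pt, eta, phi, mass=0.000511):
    """Return (E, px, py, pz) in GeV for a particle with given pT [GeV], eta, phi."""
    px, py = pt * math.cos(phi), pt * math.sin(phi)
    pz = pt * math.sinh(eta)
    e = math.sqrt(px**2 + py**2 + pz**2 + mass**2)
    return e, px, py, pz

def invariant_mass(p1, p2):
    """Invariant mass of the sum of two four-vectors, in GeV."""
    e = p1[0] + p2[0]
    px, py, pz = (p1[i] + p2[i] for i in (1, 2, 3))
    return math.sqrt(max(e**2 - px**2 - py**2 - pz**2, 0.0))

ele1 = four_vector(450.0, 0.3, 0.1)     # example electron candidates
ele2 = four_vector(430.0, -0.5, 2.9)
print(f"m(ee) = {invariant_mass(ele1, ele2):.1f} GeV")
```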
|
385 |
Z+jets au LHC : calibration des jets et mesure de sections efficaces avec le détecteur ATLAS. Sauvan, Jean-Baptiste 28 September 2012 (has links) (PDF)
The search for the Higgs boson, as well as for new physics, at the LHC requires an excellent understanding of Standard Model processes because of their similar experimental signatures. The ability to measure as precisely as possible the energy of the objects reconstructed in the detectors is also essential, both for precision measurements and for increasing the sensitivity of analyses to physics signals beyond the Standard Model. The work presented in this thesis addresses both points through the study of events containing one or more jets produced in association with a Z boson in the ATLAS detector. On the one hand, these events are used to improve the energy calibration of low transverse momentum jets, which is of prime importance for analyses that count or veto jets. On the other hand, the differential production cross section of these events is measured as a function of numerous observables and compared to various theoretical predictions. These measurements can be used to improve the predictions that serve as background models in Higgs boson analyses and in searches for physics beyond the Standard Model.
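The low-pT jet calibration exploits the transverse-momentum balance between the jet and the Z boson; the sketch below illustrates the basic balance idea on toy (jet pT, Z pT) pairs, and is not the ATLAS calibration procedure itself.

```python
# Schematic of the Z+jet pT-balance idea used for jet energy calibration:
# in events with a Z boson recoiling against a single jet, the ratio of the
# reconstructed jet pT to the Z pT estimates the jet response.
# Not the ATLAS procedure; the event list below is made up for illustration.
import statistics

# Hypothetical (jet pT, Z pT) pairs in GeV from back-to-back Z+1-jet events.
events = [(23.5, 27.0), (31.0, 34.2), (26.8, 30.1), (41.3, 44.0), (19.7, 24.5)]

response = [pt_jet / pt_z for pt_jet, pt_z in events]
mean_response = statistics.mean(response)
print(f"mean jet response ~ {mean_response:.2f}")
print(f"implied calibration factor ~ {1.0 / mean_response:.2f}")
```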
|
386 |
Looking for the Charged Higgs Boson : Simulation Studies for the ATLAS Experiment. Flechl, Martin January 2010 (has links)
The discovery of a charged Higgs boson (H+) would be an unambiguous sign of physics beyond the Standard Model. This thesis describes preparations for the H+ search with the ATLAS experiment at the Large Hadron Collider at CERN. The H+ discovery potential is evaluated, and tools for H+ searches are developed and refined. The H+→τν decay mode has long been known as the most promising H+ discovery channel. Within this thesis, first studies of this channel with realistic detector simulation, trigger simulation and consideration of all dominant systematic uncertainties have been performed. Although, as shown by these studies, the discovery sensitivity is significantly degraded compared to studies using a parametrized detector simulation, this channel remains the most powerful ATLAS H+ discovery mode. Future searches will rely on multivariate analysis techniques like the Iterative Discriminant Analysis (IDA) method. First studies indicate that a significant sensitivity increase can be achieved compared to studies based on sequential cuts. The largest uncertainty in H+ searches is the expected $t\bar{t}$ background contribution. It is shown that numbers obtained from simulated events could be off by a factor of two, decreasing the discovery sensitivity dramatically. In this thesis, the Embedding Method for data-driven background estimation is presented. By replacing the muon signature in $t\bar{t}$ events with a simulated τ, events are obtained which allow an estimation of the background contribution at the 10% level. The ATLAS τ identification focuses on comparatively clean environments like Z and W decays. To optimize the performance in high-multiplicity events like H+→τν, tau leptons are studied in $t\bar{t}$ and pile-up events. Variables which do not show discrimination power in high-multiplicity events are identified, and in some cases similar, more powerful variables are found. This makes it possible to recover some of the performance loss and to increase the robustness of the τ identification. For the analysis of the large amounts of data produced by the ATLAS detector, seamless interoperability of the various Grid flavors is required. This thesis introduces translators to overcome differences in the information systems of a number of Grid projects, and highlights important areas for future standardization.
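Multivariate methods such as IDA combine several discriminating variables into a single output that is cut on instead of applying sequential cuts; the sketch below uses a generic Fisher linear discriminant on toy data to illustrate the idea, and is not the IDA algorithm used in the thesis.

```python
# Generic Fisher linear discriminant on toy two-variable data, illustrating the
# multivariate approach contrasted with sequential cuts above. This is NOT the
# Iterative Discriminant Analysis (IDA) method; all inputs are simulated toys.
import numpy as np

rng = np.random.default_rng(0)
signal = rng.normal(loc=[1.0, 0.5], scale=0.8, size=(500, 2))
background = rng.normal(loc=[-0.5, -0.5], scale=1.0, size=(500, 2))

# Fisher weights: w proportional to S_W^-1 (mu_s - mu_b), pooled covariance.
mu_s, mu_b = signal.mean(axis=0), background.mean(axis=0)
s_w = np.cov(signal, rowvar=False) + np.cov(background, rowvar=False)
w = np.linalg.solve(s_w, mu_s - mu_b)

# One discriminant value per event, cut on instead of many sequential cuts.
t_sig, t_bkg = signal @ w, background @ w
cut = 0.5 * (t_sig.mean() + t_bkg.mean())
print(f"signal efficiency {(t_sig > cut).mean():.2f}, "
      f"background rejection {(t_bkg <= cut).mean():.2f}")
```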
|
387 |
Future Upgrades of the LHC Beam Screen Cooling System. Backman, Björn January 2006 (has links)
The topic of this thesis is the LHC, the next large particle accelerator at CERN, which will start operating in 2007. Being based on superconductivity, the LHC needs to operate at very low temperatures, which places great demands on the cryogenic system of the accelerator. To cope with the heat loads induced by the particle beam, a beam screen cooled by a forced flow of supercritical helium is used. There is an interest in upgrading the energy and luminosity of the LHC in the future, and this would require a higher heat load to be extracted by the beam screen cooling system. The objective of this thesis is to quantify different ways of upgrading this system, mainly by studying the effects of different pressure and temperature levels as well as of a different cooling medium, neon. For this purpose, a numerical program that simulates one-dimensional pipe flow was constructed. The frictional forces were accounted for through the empirical concept of a friction factor. For the fluid properties, software based on empirical correlations was used. To validate the numerical program, a comparison with previous experimental work was made. The agreement with experimental data was good for certain flow configurations and worse for others. From this it was concluded that further comparisons with experimental data must be made in order to determine the accuracy of the mathematical model and of the fluid-property correlations used. When using supercritical helium, thermo-hydraulic instabilities may arise in the cooling loop. It was of special interest to see how well a numerical program could simulate and predict this phenomenon. It was found that the numerical program did not function for such unstable conditions; in fact it was much more sensitive than reality. For the beam screen cooling system we conclude that to cope with the increased heat loads of future upgrades, an increase in pressure level is needed regardless of whether the coolant remains helium or is changed to neon. Increasing the pressure level also means that the problems with thermo-hydraulic instabilities can be avoided. Of the two coolants, helium gave the better heat extraction capacity. Unlike with neon, it is also possible to keep the present temperature level when using helium.
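The empirical friction-factor concept used in the one-dimensional flow model can be illustrated with the Darcy-Weisbach pressure drop and a Blasius-type correlation for turbulent flow; the geometry and fluid properties in the sketch below are placeholders, not the actual LHC beam-screen cooling parameters.

```python
# Illustration of the empirical friction-factor concept used in 1D pipe-flow
# models: Darcy-Weisbach pressure drop with a Blasius correlation for the
# turbulent friction factor. Geometry and fluid properties are placeholders,
# not actual LHC beam-screen cooling parameters.

def blasius_friction_factor(reynolds: float) -> float:
    """Blasius correlation for smooth pipes, valid in turbulent flow."""
    return 0.3164 * reynolds ** -0.25

def pressure_drop(length_m, diameter_m, density, velocity, viscosity):
    """Darcy-Weisbach pressure drop dp = f * (L/D) * rho * v^2 / 2  [Pa]."""
    reynolds = density * velocity * diameter_m / viscosity
    f = blasius_friction_factor(reynolds)
    return f * (length_m / diameter_m) * density * velocity ** 2 / 2.0

# Hypothetical capillary: 50 m long, 4 mm diameter, helium-like fluid properties.
dp = pressure_drop(length_m=50.0, diameter_m=0.004,
                   density=130.0, velocity=1.5, viscosity=3.5e-6)
print(f"pressure drop ~ {dp / 1e3:.1f} kPa")
```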
|
388 |
Identification of LHC beam loss mechanism : a deterministic treatment of loss patterns. Marsili, Aurélien 21 November 2012 (has links) (PDF)
CERN's Large Hadron Collider (LHC) is the largest machine ever built, with a total circumference of 26.7 km, and the most powerful accelerator ever, in both beam energy and beam intensity. The main magnets are superconducting, keeping the particles in two counter-circulating beams, which collide at four interaction points. CERN and the LHC are described in chap. 1. The superconducting magnets of the LHC have to be protected against particle losses. Depending on the number of lost particles, the coils of the magnets can become normal conducting and/or be damaged. To avoid these events, a beam loss monitoring (BLM) system was installed to measure the particle loss rates. If the predefined safe thresholds of loss rates are exceeded, the beams are directed out of the accelerator ring towards the beam dump. The detectors of the BLM system are mainly ionisation chambers located outside of the cryostats. In total, about 3500 ionisation chambers are installed. Further challenges include the high dynamic range of the losses (chamber currents ranging between 2 pA and 1 mA). The BLM system is further described in chap. 2. The subject of this thesis is to study the loss patterns and find the origin of the losses in a deterministic way, by comparing measured losses to well understood loss scenarios. This is done through a case study: different techniques were used on a restricted set of loss scenarios, as a proof of concept of the possibility of extracting information from a loss profile. Finding the origin of the losses should make it possible to act in response. A justification of the doctoral work is given at the end of chap. 2. The thesis then focuses on the theoretical understanding and the implementation of the decomposition of a measured loss profile as a linear combination of the reference scenarios, and on the evaluation of the error on the recomposition and of its correctness. The principles of vector decomposition are developed in chap. 3. An ensemble of well controlled loss scenarios (such as vertical and horizontal blow-up of the beams or momentum offset during collimator loss maps) has been gathered, in order to allow the study and creation of reference vectors. To achieve the vector decomposition, linear algebra (matrix inversion) is used together with the numerical algorithm for the Singular Value Decomposition. Additionally, a specific code for vector projection onto a non-orthogonal basis of a hyperplane was developed. The implementation of the vector decomposition on LHC data is described in chap. 4. After this, the systematic use of the decomposition tools on the time evolution of the losses is described: first as a study of the second-by-second variations, then by comparison with a calculated default loss profile. The different ways to evaluate the variation are studied and presented in chap. 5. The next chapter (6) describes the gathering of decomposition results applied to the beam losses of 2011. The vector decomposition is applied to every second of the "stable beams" periods, as a study of the spatial distribution of the losses. Comparisons of the decomposition results with measurements from other LHC instruments allowed several validations. Finally, a global conclusion on the value of the vector decomposition is given. The extra chapter in Appendix A describes the code that was developed to access the BLM data, to represent them in a meaningful way, and to store them. This included connecting to different databases. The whole tool uses ROOT objects to send SQL queries to the databases, as well as a Java API, and is coded in Python. A short glossary of the acronyms used here can be found at the end, before the bibliography.
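The decomposition described above, writing a measured loss profile as a linear combination of reference scenarios, amounts to a least-squares problem that can be solved with the Singular Value Decomposition; the sketch below demonstrates the idea on toy vectors and is not the thesis's actual code or BLM data.

```python
# Minimal sketch of decomposing a measured loss profile into a linear
# combination of reference loss scenarios via least squares / SVD, as
# described above. Reference vectors and the "measured" profile are toys.
import numpy as np

# Columns = reference scenarios (e.g. horizontal blow-up, vertical blow-up,
# momentum offset); rows = monitor positions along the ring (toy dimensions).
references = np.array([
    [0.9, 0.1, 0.2],
    [0.3, 0.8, 0.1],
    [0.1, 0.4, 0.7],
    [0.2, 0.1, 0.9],
    [0.5, 0.3, 0.2],
])

true_weights = np.array([0.6, 0.3, 0.1])
measured = references @ true_weights + 0.01 * np.random.default_rng(1).normal(size=5)

# Least-squares solution (numpy.linalg.lstsq is SVD-based internally).
weights, *_ = np.linalg.lstsq(references, measured, rcond=None)
residual_norm = np.linalg.norm(references @ weights - measured)

print("fitted weights :", np.round(weights, 3))
print("residual norm  :", round(float(residual_norm), 4))
```

The fitted weights indicate how much each reference scenario contributes to the measured profile, while the residual norm quantifies how well the linear combination reproduces it.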
|
389 |
Supersymmetry in the LHC era. Interplay between flavour physics, cosmology and collider physics. Mahmoudi, Farvah 13 December 2012 (has links) (PDF)
Information on new physics can be extracted from several independent sectors, in particular: direct searches for the Higgs boson and for new particles at colliders, which have entered a new era with the start-up of the LHC; indirect information from flavour physics data, using the results obtained at the B factories and recently also at the LHC; and finally indirect information from the dark matter relic density and from direct dark matter searches, especially in view of the recent results of the XENON, CoGeNT, CRESST, ... experiments. Combining the information from the different sectors is indeed rich in implications and makes it possible to reduce the parameter space of new physics scenarios. We have demonstrated the existence of such synergies in the context of supersymmetry, for several constrained scenarios as well as for a more general MSSM scenario (pMSSM).
|
390 |
Performance des jets et mesure de la section efficace de production de paires t tbar dans le canal tout hadronique auprès du détecteur ATLAS. Ghodbane, N. 22 November 2012 (has links) (PDF)
In this thesis, we present the research work that we carried out within the ATLAS collaboration at the LHC during the period 2005-2012. This work includes the preparation of two of the sub-detectors composing the inner detector system, the study and optimisation of the jet reconstruction algorithms in order to improve their performance, and finally the use of data collected from proton-proton collisions at a centre-of-mass energy of 7 TeV to extract the measurement of the top quark pair production cross section in the all-hadronic decay mode.
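At its core, such a cross-section extraction follows the standard counting form sigma = (N_obs - N_bkg) / (efficiency x integrated luminosity); the sketch below applies it to placeholder yields, which are assumptions for illustration and not the measurement's actual numbers.

```python
# Standard counting-experiment form of a cross-section extraction,
# sigma = (N_obs - N_bkg) / (efficiency * integrated luminosity).
# All numbers below are placeholders, not the thesis's measured yields.

def cross_section_pb(n_observed, n_background, efficiency, lumi_inv_pb):
    """Return the cross section in pb for a simple counting measurement."""
    return (n_observed - n_background) / (efficiency * lumi_inv_pb)

# Hypothetical all-hadronic ttbar-like selection at 7 TeV (illustrative only):
sigma = cross_section_pb(n_observed=1200.0, n_background=850.0,
                         efficiency=0.002, lumi_inv_pb=1000.0)
print(f"sigma ~ {sigma:.0f} pb")
```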
|