511

Top of the tops: Combining searches for Dark Matter with top quarks at √s = 13 TeV with the ATLAS detector

Liberatore, Marianna 23 January 2023 (has links)
Despite the multiple compelling astrophysical and cosmological lines of evidence for dark matter, its true nature remains unknown. One motivation for dark matter searches at the Large Hadron Collider (LHC), and in particular with the ATLAS experiment, is the especially promising possibility that interactions between ordinary matter and dark matter are mediated by new spin-0 particles. Such particles would extend the Standard Model with a potential dark sector to which the dark matter particles belong. Similarly to the Higgs boson, these new mediators couple most strongly to the heaviest particles via Yukawa-type couplings, favouring their production in association with heavy-flavour quarks. This thesis presents the results of the statistical combination of two searches targeting dark matter produced in association with a top-quark pair or a single top quark, each with two charged leptons in the final state. The two channels have complementary properties, and their combination can significantly enhance the sensitivity to dark matter signals. The combination is carried out using 139 fb⁻¹ of pp collision data produced at a centre-of-mass energy of 13 TeV and recorded by the ATLAS detector at the LHC. The results of the combination are interpreted in terms of simplified dark matter models with a spin-0 scalar or pseudoscalar mediator. The statistical combination extends the masses excluded at 95% confidence level up to 350 GeV for both scalar and pseudoscalar mediators. The observed upper limits on the cross-section are improved by 20% (30%) for the scalar (pseudoscalar) mediator with respect to the best individual channel.
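The gain from statistically combining two channels can be illustrated with a toy profile-likelihood scan. The sketch below is purely illustrative: the event counts and signal/background yields are hypothetical placeholders, not the analysis's statistical model, and the asymptotic 95% CL criterion ignores the CLs procedure used by ATLAS.

```python
# Toy combination of two counting channels (illustrative only; hypothetical yields).
import numpy as np
from scipy.optimize import brentq

# Hypothetical per-channel yields: observed events, expected background, signal at mu = 1.
channels = [
    {"n_obs": 42, "bkg": 40.0, "sig": 6.0},   # stand-in for the ttbar + DM channel
    {"n_obs": 18, "bkg": 17.0, "sig": 4.0},   # stand-in for the single-top + DM channel
]

def log_likelihood(mu):
    """Summed Poisson log-likelihood over channels for signal strength mu."""
    logL = 0.0
    for ch in channels:
        lam = ch["bkg"] + mu * ch["sig"]
        logL += ch["n_obs"] * np.log(lam) - lam   # constant terms dropped
    return logL

# Best-fit signal strength from a simple grid scan (clipped at zero for an upper limit).
mus = np.linspace(0.0, 10.0, 2001)
mu_hat = mus[np.argmax([log_likelihood(m) for m in mus])]

def q_mu(mu):
    """One-sided profile-likelihood test statistic (no nuisance parameters here)."""
    return 2.0 * (log_likelihood(mu_hat) - log_likelihood(mu)) if mu >= mu_hat else 0.0

# Crude asymptotic 95% CL upper limit: q_mu crosses 1.64**2.
mu_up = brentq(lambda m: q_mu(m) - 1.64**2, mu_hat + 1e-6, 50.0)
print(f"best-fit mu = {mu_hat:.2f}, approximate 95% CL upper limit on mu = {mu_up:.2f}")
```

Running the same scan with only one channel in the list gives a weaker (larger) upper limit, which is the mechanism behind the quoted 20-30% improvement of the combination over the best individual channel.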
512

Search for axion-like particles through their effects on the transparency of the universe with the Fermi Large Area Telescope

Gallardo Romero, Galo 26 June 2020 (has links)
Axion-like particles, pseudo-scalar particles that arise in theories beyond the Standard Model, mix with photons in the presence of magnetic fields. If an axion-like particle is produced within a cosmic magnetic field, it evades absorption by the extragalactic background light and can therefore survive over cosmological distances before oscillating back into a photon. This leads to an increased transparency of the Universe to gamma rays. In this thesis, we search for transparency effects compatible with the existence of axion-like particles using six years of data from the Fermi Large Area Telescope. We derive and combine the likelihoods of the highest-energy photon events from a sample of hard, distant sources in order to compare models that include axion-like particles with models containing only extragalactic background light. The sources are active galactic nuclei from the Second Catalog of Hard Fermi-LAT Sources at redshift z ≥ 0.1. For an intergalactic magnetic field strength of B = 1 nG and a coherence length of s = 1 Mpc, we find no evidence for a modified transparency induced by axion-like particles and therefore set upper limits. We exclude photon-axion coupling constants above 10⁻¹¹ GeV⁻¹ for axion masses m ≈ 3.0 neV.
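The core statistical step, combining per-source likelihoods and comparing the two transparency hypotheses, can be caricatured as below. The optical depths and the survival-probability likelihood are hypothetical stand-ins; the real analysis models the Fermi-LAT instrument response and the full source spectra.

```python
# Toy likelihood-ratio comparison of EBL-only vs EBL+ALP transparency models.
import numpy as np

# For each source: optical depth tau at the energy of its highest-energy photon,
# under the EBL-only model and under an EBL+ALP model (ALPs reduce the effective tau).
# These numbers are invented for illustration.
tau_ebl = np.array([2.1, 1.4, 3.0, 0.9, 2.6])
tau_alp = np.array([1.5, 1.1, 2.2, 0.8, 1.9])

def log_likelihood(tau):
    # Crude stand-in: the photon survives propagation with probability exp(-tau),
    # so the summed log-likelihood over independent sources is -sum(tau).
    return np.sum(-tau)

ts = 2.0 * (log_likelihood(tau_alp) - log_likelihood(tau_ebl))
print(f"Test statistic TS = {ts:.2f} (a large TS would favour the ALP hypothesis)")
```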
513

Accélération du lentillage gravitationnel à plans multiples par apprentissage profond / Accelerating multi-plane gravitational lensing with deep learning

Wilson, Charles 04 1900 (has links)
Le "modèle standard" actuel de la cosmologie est celui de ΛCDM, décrivant un Univers en expansion accélérée ainsi qu’une structure de matière sombre froide formée en halos, sur lesquels s’assemblent les galaxies. Malgré les nombreuses confirmations observationnelles de ses prédictions, il existe d’importantes tensions entre les mesures de la distribution de structure sombre aux petites échelles de l’Univers et ce qui serait attendu de ΛCDM. Cependant, ces halos légers de matière sombre, qui sont prédit d’abonder à travers le cosmos, n’hébergent pas de galaxies lumineuses et sont donc très difficiles à observer directement. Leur présence peut toutefois être détectée dans les lentilles gravitationnelles fortes de type galaxie-galaxie, un phénomène se produisant lorsque la lumière d’une galaxie d’arrière-plan est fortement déviée par le champ gravitationnel d’une galaxie d’avantplan, formant des images multiples et des arcs étendus. Les halos distribués en ligne de visée de tels systèmes, ainsi que ceux imbriqués dans la galaxie lentille, peuvent causer des perturbations gravitationnelles dans les images de galaxies lentillées. La détection de ces effets infimes dans des observations de lentilles gravitationnelles est faite par des méthodes statistiques Bayésiennes, qui nécéssitent des centaines de milliers de simulations de la contribution de ces perturbateurs à la déflexion de la lumière. Traditionnellement, la modélisation du lentillage par les halos en ligne de visée s’est faite avec le formalisme du lentillage à plans multiples, qui souffre d’une nature récursive peu efficace. De plus, il est prédit par le modèle ΛCDM que la majorité des systèmes de lentilles gravitationnelles comporteraient davantage de halos en ligne de visée que de sous-halos imbriqués dans la galaxie lentille, motivant une modélisation détaillée des effets de ligne de visée. Dans un contexte d’analyse Bayésienne, l’approche du lentillage à plans multiples représente une échelle de temps de plusieurs jours pour l’analyse d’un seul système. En considérant que des grands relevés du ciel comme ceux de l’Observatoire Vera Rubin et du télescope spatial Euclid sont projetés de découvrir des centaines de milliers de lentilles gravitationnelles, l’effort de contraindre la distribution de matière sombre aux petites échelles se voit confronté à ce qui pourrait être un insurmontable problème de temps de calcul. Dans ce mémoire, je présente le développement d’un nouveau formalisme de modélisation du lentillage gravitationnel par halos en ligne de visée accéléré par des réseaux de neurones, motivé par les lacunes du lentillage à plans multiples et l’importance scientifique de la modélisation de ces effets. Les architectures de ces réseaux, conçues dans le cadre de ce travail, sont basées sur le mécanisme d’attention, et peuvent être conditionnées sur des ensembles de modèles de halos en ligne de visée afin de produire les angles de déflexion leur étant associés. Ce formalisme offre la flexibilité requise pour remplacer celui du lentillage à plans multiples, laissant à l’usager la liberté de spécifier un modèle de lentille principale et étant compatible avec des grilles de pixels de taille quelconque. Notre formalisme permet d’accélérer la modélisation du lentillage de ligne de visée par presque deux ordres de grandeur lorsque comparé au lentillage à plans multiples, et promet d’atteindre une exactitude lui étant comparable dans des développements futurs. 
Il s’agit d’une contribution significative à l’étude de la matière sombre aux petites échelles, qui permettra soit de réconcilier ΛCDM et les observations, ou mènera à l’adoption d’un modèle cosmologique alternatif. / The current "standard model" of cosmology is that of ΛCDM, describing a Universe undergoing accelerated expansion with a structure of cold dark matter formed into halos, onto which are assembled galaxies. Despite the numerous observational confirmations of its predictions, there remains some important tensions between measures of the distribution of dark structure on small scales of the Universe and what would be expected from ΛCDM. However, these light dark matter halos, predicted to be adundant throughout the cosmos, are not hosts of luminous galaxies and are therefore very difficult to observe directly. Yet, their presence can still be detected in galaxy-galaxy type strong gravitational lenses, a phenomenon occuring when the light of a background galaxy is strongly deflected by the gravitational field of a foreground galaxy, forming multiple images and extended arcs. Halos distributed along the line-of-sight of such systems, as well as those nested within the lens galaxy, can introduce gravitational perturbations in images of lensed galaxies. The detection of such infinitesimal effects in strong lensing observations is made with methods relying on Bayesian statistics, which require hundreds of thousands of simulations of the contribution of these perturbers to the deflection of light. Traditionally, modeling the lensing from line-of-sight halos has been done with the multi-plane lensing framework, which suffers from its inefficient recursive nature. Morevoer, the ΛCDM model predicts that most gravitational lens systems would host a larger amount of line-of-sight halos than subhalos nested within the lens galaxy, motivating a detailed modeling of line-of-sight effects. In a Bayesian analysis context, the multi-plane lensing approach represents a timescale of multiple days for the analysis of a single system. Considering that large sky surveys such as those of the Vera Rubin Observatory and the Euclid space telescope are projected to discover hundreds of thousands of gravitational lenses, the effort of constraining the small-scale distribution of dark matter is confronted to what might seem like an insurmountable problem of computation time. In this thesis, I present the development of a new neural-network-accelerated framework for modeling the gravitational lensing by line-of-sight halos, motivated by the shortcomings of multiplane lensing and the scientific importance of modeling these effects. The architectures of these networks, conceived as part of this work, are based on the attention mechanism, and can be conditioned on sets of line-of-sight halo models in order to produce their associated deflection angles. This framework offers the flexibility required to replace that of multi-plane lensing, leaving up to the user the freedom to specify a main lens model and being compatible with pixel grids of any size. Our framework allows to accelerate the modeling of line-of-sight lensing by nearly two orders of magnitude relative to multi-plane lensing, and promises to reach a comparable accuracy in future developments. This constitutes a significative contribution to the study of dark matter on small scales, which will either lead to the reconciliation of ΛCDM and observations, or the adoption of an alternate cosmological model.
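The "inefficient recursive structure" refers to the standard multi-plane lens equation, in which the angle on plane j depends on the deflections accumulated on every earlier plane, θ_j = θ_1 − Σ_{i<j} (D_{ij}/D_j) α_i(θ_i). The sketch below is a minimal illustration of that recursion with a toy point-mass deflection and invented distance ratios; it is not the thesis's code or its neural-network replacement.

```python
# Minimal sketch of recursive multi-plane ray tracing (illustrative only).
import numpy as np

def point_mass_deflection(theta, center, theta_e):
    """Reduced deflection of a toy point-mass perturber with Einstein radius theta_e."""
    d = theta - center
    r2 = np.sum(d**2, axis=-1, keepdims=True) + 1e-12
    return theta_e**2 * d / r2

def multiplane_ray_trace(theta_obs, planes, dist_ratios):
    """
    theta_obs   : (N, 2) observed image-plane angles.
    planes      : list of per-plane perturber parameters, ordered by redshift.
    dist_ratios : dist_ratios[i][j] = D_ij / D_j for i < j (the source is the last plane).
    Returns the source-plane angles. The double loop is inherently sequential, which
    is the inefficiency the attention-based networks are meant to remove.
    """
    thetas = [theta_obs]                    # ray angle on each successive plane
    n = len(planes)
    for j in range(1, n + 1):               # plane n is the source plane
        beta_j = theta_obs.copy()
        for i in range(j):                  # every earlier plane deflects the ray
            alpha_i = point_mass_deflection(thetas[i], planes[i]["center"], planes[i]["theta_e"])
            beta_j -= dist_ratios[i][j] * alpha_i
        thetas.append(beta_j)
    return thetas[-1]

# Example: two hypothetical line-of-sight perturbers and a 3x3 grid of rays.
grid = np.stack(np.meshgrid(np.linspace(-1, 1, 3), np.linspace(-1, 1, 3)), axis=-1).reshape(-1, 2)
planes = [{"center": np.array([0.2, 0.0]), "theta_e": 0.05},
          {"center": np.array([-0.3, 0.4]), "theta_e": 0.08}]
ratios = {0: {1: 0.4, 2: 1.0}, 1: {2: 1.0}}
print(multiplane_ray_trace(grid, planes, ratios))
```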
514

Semi-Supervised Learning for Semi-Visible Jets: A Search for Dark Matter Jets at the LHC with the ATLAS Detector

Busch, Elena Laura January 2024 (has links)
A search is presented for hadronic signatures of a strongly coupled hidden dark sector, accessed via resonant production of a Z′ mediator. The analysis uses 139 fb⁻¹ of proton-proton collision data collected by the ATLAS experiment during Run 2 of the LHC. The Z′ mediator decays to two dark quarks, which each hadronize and decay to showers containing both dark and Standard Model particles; these showers are termed "semi-visible" jets. The final state consists of missing energy aligned with one of the jets, a topology that is ignored by most dark matter searches. A supervised machine-learning method is used to select these dark showers and reject the dominant background of mis-measured multijet events. A complementary semi-supervised anomaly-detection approach introduces broad sensitivity to a variety of strongly coupled dark matter models. A resonance search is performed by fitting the transverse mass spectrum with a polynomial background-estimation function. Results are presented as limits on the effective cross-section of the Z′, parameterized by the fraction of invisible particles in the decay and the Z′ mass. No structure in the transverse mass spectrum compatible with the signal hypothesis is observed. Z′ mediator masses ranging from 2.0 TeV to 3.5 TeV are excluded at the 95% confidence level.
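The fit-and-look logic of such a resonance search can be illustrated with a toy bump hunt: fit a smooth parameterization to a falling transverse-mass spectrum and inspect the residual significance in each bin. The binning, yields, and functional form below are hypothetical, and the per-bin Gaussian significance is a crude stand-in for the proper statistical treatment used in the analysis.

```python
# Toy bump hunt on a falling transverse-mass spectrum (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
mT = np.linspace(1500.0, 4000.0, 26)          # hypothetical bin centres in GeV
bkg_true = 1e6 * np.exp(-mT / 400.0) + 5.0    # steeply falling toy background
counts = rng.poisson(bkg_true).astype(float)

# Fit a low-order polynomial to log(counts) as a stand-in for the smooth
# background parameterization, then inspect per-bin residual significances.
good = counts > 0
coeffs = np.polyfit(mT[good], np.log(counts[good]), deg=3)
fit = np.exp(np.polyval(coeffs, mT))
signif = (counts - fit) / np.sqrt(np.maximum(fit, 1.0))  # crude Gaussian approximation
i = int(np.argmax(signif))
print(f"largest upward deviation: {signif[i]:.1f} sigma at mT = {mT[i]:.0f} GeV")
```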
515

Prospects for Galactic dark matter searches with the Cherenkov Telescope Array (CTA)

Hütten, Moritz 05 May 2017 (has links)
In the current understanding of structure formation in the Universe, the Milky Way is embedded in a clumpy halo of dark matter (DM). Regions of high DM density are expected to emit enhanced γ-radiation from relic DM annihilation. This γ-radiation can possibly be detected by γ-ray observatories on Earth, such as the forthcoming Cherenkov Telescope Array (CTA). This dissertation presents a semi-analytical density modelling of the subclustered Milky Way DM halo, and the γ-ray intensity at Earth from DM annihilation in Galactic subclumps is calculated for various substructure models. It is shown that the modelling approach reproduces the γ-ray intensities obtained from extensive dynamical DM simulations, and that it is consistent with the DM properties derived from optical observations of dwarf spheroidal galaxies. A systematic confidence margin of plausible γ-ray intensities from Galactic DM annihilation is estimated, encompassing a variety of previous findings, and the average distances, masses, and extended emission profiles of the γ-ray-brightest DM clumps are calculated. The DM substructure models are then used to draw reliable predictions for detecting Galactic DM density clumps with CTA, using the most recent benchmark calculations of the instrument's performance. A likelihood-based calculation with CTA analysis software is applied to determine the instrumental sensitivity for detecting the γ-ray-brightest DM clump in the projected CTA extragalactic survey. An alternative likelihood-based analysis method is developed to detect DM substructures as anisotropies in the angular power spectrum of the extragalactic survey data. 
The analyses predict that the CTA extragalactic survey will be able to probe annihilation cross sections of ⟨σv⟩ > 1 × 10⁻²⁴ cm³ s⁻¹ at the 95% confidence level for a DM particle mass of mχ ∼ 500 GeV from DM annihilation in substructures. This sensitivity is comparable to long-term observations of single dwarf spheroidal galaxies with CTA. Independently of a particular source model, it is found that the CTA extragalactic survey will be able to detect anisotropies in the diffuse γ-ray background above 100 GeV at a relative amplitude of CP_F > 10⁻².
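The anisotropy method boils down to measuring the angular power spectrum of a survey map and asking whether it exceeds the Poisson (shot-noise) floor at high multipoles, where unresolved DM clumps would contribute. The sketch below uses healpy on a mock counts map with an invented resolution and mean count rate; the thesis instead uses simulated CTA extragalactic-survey data and a full likelihood.

```python
# Toy angular-power-spectrum check against the shot-noise floor (illustrative only).
import numpy as np
import healpy as hp

nside = 64
npix = hp.nside2npix(nside)
omega_pix = hp.nside2pixarea(nside)            # pixel solid angle in steradians

rng = np.random.default_rng(1)
mean_counts = 20.0                             # hypothetical mean counts per pixel
counts = rng.poisson(mean_counts, npix).astype(float)

delta = counts / counts.mean() - 1.0           # relative fluctuation map
cl = hp.anafast(delta, lmax=200)
cl_noise = omega_pix / counts.mean()           # expected shot-noise power for a Poisson map

# A DM-substructure signal would appear as cl exceeding cl_noise at high multipoles.
print(f"measured <C_l> (l=100-200): {cl[100:].mean():.2e},  shot-noise floor: {cl_noise:.2e}")
```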
516

Cosmologia usando aglomerados de galáxias no Dark Energy Survey / Cosmology with Galaxy Clusters in the Dark Energy Survey

Silva, Michel Aguena da 03 August 2017 (has links)
Galaxy clusters are the largest bound structures in the Universe. Their distribution maps the dark matter halos formed in the deep potential wells of the dark matter field. As a result, the abundance of galaxy clusters is highly sensitive to the expansion of the Universe as well as to the growth of dark matter perturbations, representing a powerful tool for cosmological purposes. In the current era of large-scale surveys with enormous volumes of data, statistical quantities derived from the surveyed objects (galaxies, clusters, supernovae, quasars, etc.) can be used to extract cosmological information. The main goal of this thesis is to explore the potential of galaxy clusters for constraining cosmology. To that end, we study halo formation theory, the detection of halos and clusters, the statistical tools required to extract cosmological information from detected clusters, and finally the effects of optical detection.

In constructing the theoretical prediction for the halo number counts, we analyze how each cosmological parameter of interest affects the halo abundance, the importance of using the halo covariance, and the effectiveness of halos for cosmological constraints. The redshift range and the use of prior knowledge of parameters are also investigated in detail. The theoretical prediction is tested on a dark matter simulation, where the cosmology is known and a dark matter halo catalog is available. In the analysis of the simulation we find that it is possible to obtain good constraints for some parameters (Omega_m, w, sigma_8, n_s), while other parameters (h, Omega_b) require external priors from different cosmological probes. Regarding statistical methods, we discuss the concepts of likelihood, priors, and the posterior distribution. The Fisher matrix formalism and its application to galaxy clusters are presented and used to forecast constraints from ongoing and future surveys. For the analysis of real data we introduce Markov Chain Monte Carlo (MCMC) methods, which do not assume Gaussianity of the parameter distribution but have a much higher computational cost than the Fisher matrix.

The observational effects are studied in detail. Using the Fisher matrix approach, we carefully explore the effects of completeness and purity, and determine in which cases it is worthwhile to include extra parameters in order to lower the mass threshold. An interesting finding is that including completeness and purity parameters along with cosmological parameters does not degrade dark energy constraints if other observational effects are already being considered. The use of priors on nuisance parameters does not affect the dark energy constraints unless these priors are better than 1%. The WaZp cluster finder was run on a cosmological simulation, producing a cluster catalog. Comparing the detected galaxy clusters to the dark matter halos, the observational effects were investigated and measured. Using these measurements, we were able to include corrections in the prediction of cluster counts, resulting in good agreement with the detected cluster abundance. The results and tools developed in this thesis provide a framework for the analysis of galaxy clusters for cosmological purposes. Several codes were created and tested in this work, among them an efficient code to compute theoretical predictions of halo abundance and covariance, a code to estimate the abundance and covariance of galaxy clusters including multiple observational effects, and a pipeline to match and compare halo/cluster catalogs. This pipeline has been integrated into the Science Portal of the Laboratório Interinstitucional de e-Astronomia (LIneA) and is being used to automatically assess the quality of cluster catalogs produced by the Dark Energy Survey (DES) collaboration; it will also be used in future surveys.
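The mechanics of a Fisher-matrix forecast for cluster counts can be shown in a few lines: numerically differentiate the predicted counts in redshift bins with respect to each parameter and sum the Poisson-only information. The abundance model below is an invented placeholder, not the halo mass function or survey selection used in the thesis.

```python
# Toy Fisher-matrix forecast for cluster number counts in redshift bins (illustrative only).
import numpy as np

z_bins = np.linspace(0.1, 0.9, 9)          # hypothetical bin centres
fid = {"Omega_m": 0.3, "sigma_8": 0.8}     # fiducial parameters

def counts(Omega_m, sigma_8):
    # Placeholder abundance: sigma_8 rescales the amplitude while Omega_m also shifts
    # the redshift distribution, so the two parameters are not fully degenerate.
    z_star = 0.5 * (Omega_m / 0.3) ** 0.5
    return 5e3 * sigma_8**4 * z_bins**2 * np.exp(-(z_bins / z_star) ** 1.5)

def derivative(name, step=1e-3):
    up, dn = dict(fid), dict(fid)
    up[name] += step
    dn[name] -= step
    return (counts(**up) - counts(**dn)) / (2 * step)

params = list(fid)
N = counts(**fid)
F = np.zeros((len(params), len(params)))
for a, pa in enumerate(params):
    for b, pb in enumerate(params):
        # Poisson-only Fisher information, summed over independent redshift bins.
        F[a, b] = np.sum(derivative(pa) * derivative(pb) / N)

cov = np.linalg.inv(F)
for i, p in enumerate(params):
    print(f"forecast 1-sigma uncertainty on {p}: {np.sqrt(cov[i, i]):.4f}")
```

An MCMC analysis of the same counts would replace the Gaussian Fisher approximation with a sampled posterior, at the much higher computational cost noted in the abstract.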
518

Astrophysical and Collider Signatures of Extra Dimensions

Melbéus, Henrik January 2010 (has links)
In recent years, there has been great interest in the subject of extra dimensions in particle physics. In particular, a number of models have been suggested which provide solutions to some of the problems with the current Standard Model of particle physics, and which could be tested in the next generation of high-energy experiments. Among the most important of these models are the large extra dimensions model by Arkani-Hamed, Dimopoulos, and Dvali, the universal extra dimensions model, and models allowing right-handed neutrinos to propagate in the extra dimensions. In this thesis, we study phenomenological aspects of these three models, or simple modifications of them.

The Arkani-Hamed-Dimopoulos-Dvali model attempts to solve the gauge hierarchy problem through a volume suppression of Newton's gravitational constant, lowering the fundamental Planck scale down to the electroweak scale. However, this solution is unsatisfactory in the sense that it introduces a new scale through the radius of the extra dimensions, which is unnaturally large compared to the electroweak scale. It has been suggested that a similar model, with a hyperbolic internal space, could provide a more satisfactory solution to the problem, and we consider the hadron collider phenomenology of such a model.

One of the main features of the universal extra dimensions model is the existence of a potential dark matter candidate, the lightest Kaluza-Klein particle. In the so-called minimal universal extra dimensions model, the identity of this particle is well defined, but in more general models it could change. We consider the indirect neutrino detection signals for a number of different such dark matter candidates, in a five- as well as a six-dimensional model.

Finally, right-handed neutrinos propagating in extra dimensions could provide an alternative scenario to the seesaw mechanism for generating small masses for the left-handed neutrinos. Since extra-dimensional models are non-renormalizable, the Kaluza-Klein tower is expected to be cut off at some high-energy scale. We study a model where a Majorana neutrino at this cutoff scale is responsible for the generation of the light neutrino masses, while the lower modes of the tower could possibly be observed at the Large Hadron Collider. We investigate the bounds on the model from non-unitarity effects, as well as collider signatures of the model.
519

Discrimination d'événements par analyse des signaux enregistrés par le projet PICASSO / Event discrimination through analysis of the signals recorded by the PICASSO project

Archambault, Simon 07 1900 (has links)
Dark matter has been a mystery for astrophysicists for many years. Numerous observations have shown that up to 85% of the gravitational mass of the Universe is made of this unknown type of matter. One of the theories explaining this missing-mass problem considers WIMPs (Weakly Interacting Massive Particles), neutral stable particles predicted by extensions of the Standard Model, as possible candidates. The PICASSO experiment (Project In Canada to Search for Supersymmetric Objects) tries to detect this particle directly. The technique uses superheated droplet detectors with freon (C4F10) as the active medium. When a WIMP hits a fluorine nucleus, it creates a nuclear recoil, which in turn triggers a phase transition from a liquid droplet to a gaseous bubble. The acoustic noise of this event is then recorded by piezoelectric transducers mounted on the walls of the detector. Particles other than WIMPs can, however, also trigger this phase transition: alpha particles, or even gamma rays, can create bubbles. The data acquisition system (DAQ) is also subject to electronic noise that can be picked up, and to acoustic noise from sources outside the detector. Fractures in the polymer holding the droplets in place can likewise trigger spontaneous phase transitions. There is therefore a need to minimize the impact of these backgrounds. The purity of the materials used in detector fabrication becomes very important, and digital signal-processing methods are used to develop discrimination variables that improve the exclusion limits of the WIMP search.
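One simple kind of acoustic discrimination variable is the logarithm of the band-limited power of the recorded piezo waveform, which separates louder from quieter bubble-nucleation events. The sketch below is only an illustration of that idea: the waveforms, sampling rate, and frequency band are hypothetical, and the variable is not necessarily the one defined in this thesis.

```python
# Illustrative acoustic discrimination variable for piezo waveforms (hypothetical setup).
import numpy as np

def acoustic_power_variable(waveform, fs, f_lo=20e3, f_hi=200e3):
    """Log of the summed spectral power between f_lo and f_hi (Hz)."""
    spectrum = np.fft.rfft(waveform * np.hanning(len(waveform)))
    freqs = np.fft.rfftfreq(len(waveform), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return np.log10(np.sum(np.abs(spectrum[band]) ** 2) + 1e-30)

# Toy example: a quiet and a loud burst yield clearly different values of the variable,
# which is the kind of separation exploited to reject background events.
fs = 1e6                                   # 1 MHz sampling rate (assumed)
t = np.arange(4096) / fs
rng = np.random.default_rng(2)
quiet = 0.5 * np.sin(2 * np.pi * 60e3 * t) * np.exp(-t / 1e-3) + 0.05 * rng.normal(size=t.size)
loud = 5.0 * np.sin(2 * np.pi * 60e3 * t) * np.exp(-t / 1e-3) + 0.05 * rng.normal(size=t.size)
print(acoustic_power_variable(quiet, fs), acoustic_power_variable(loud, fs))
```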
520

Analyse des données et étude systématique de la réponse des détecteurs dans le cadre du projet PICASSO / Data analysis and systematic study of the detector response within the PICASSO project

Giroux, Guillaume January 2008 (has links)
Thesis digitized by the Division de la gestion de documents et des archives, Université de Montréal.
