91

Quasars and Low Surface Brightness Galaxies as Probes of Dark Matter / Kvasarer och ytljussvaga galaxer som redskap för att studera den mörka materian

Zackrisson, Erik January 2005
Most of the matter in the Universe appears to be in some form which does not emit or absorb light. While evidence for the existence of this dark matter has accumulated over the last seventy years, its nature remains elusive. In this thesis, quasars and low surface brightness galaxies (LSBGs) are used to investigate the properties of the dark matter. Quasars are extremely bright light sources which can be seen over vast distances. These cosmic beacons may be used to constrain dark matter in the form of low-mass, compact objects along the line of sight, as such objects are expected to induce brightness fluctuations in quasars through gravitational microlensing effects. Using a numerical microlensing model, we demonstrate that the uncertainty in the typical size of the optical continuum-emitting region in quasars represents the main obstacle in this procedure. We also show that, contrary to claims in the literature, microlensing fails to explain the observed long-term optical variability of quasars. Here, quasar distances are inferred from their redshifts, which are assumed to stem from the expansion of the Universe. Some astronomers, however, defend the view that quasar redshifts could have a different origin. A number of potential methods for falsifying claims of such non-cosmological redshifts are proposed. As the ratio of dark to luminous matter is known to be unusually high in LSBGs, these objects have become the prime targets for probing dark matter halos around galaxies. Here, we use spectral evolutionary models to constrain the properties of the stellar populations in a class of unusually blue LSBGs. Using rotation curve data obtained at the ESO Very Large Telescope, we also investigate the density profiles of their dark halos. We find our measurements to be inconsistent with the predictions of the currently favoured cold dark matter scenario.
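The source-size caveat can be made concrete with the standard point-lens relations (textbook microlensing, not specific to this thesis): a compact lens of mass M produces an angular Einstein radius

```latex
\theta_E = \sqrt{\frac{4GM}{c^{2}}\,\frac{D_{LS}}{D_L\,D_S}},
\qquad
R_E = \theta_E\, D_S ,
```

where D_L, D_S and D_LS are angular-diameter distances to the lens, to the source, and between them. Microlensing fluctuations are strong only while the continuum-emitting region is smaller than the source-plane Einstein radius R_E; a source much larger than R_E smooths the magnification away, which is why the unknown continuum size dominates the uncertainty.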
92

Modified Gravity and the Phantom of Dark Matter

Brownstein, Joel Richard January 2009
Astrophysical data analysis of the weak-field predictions supports the claim that modified gravity (MOG) theories provide a self-consistent, scale-invariant, universal description of galaxy rotation curves, without the need of non-baryonic dark matter. Comparison to the predictions of Milgrom's modified dynamics (MOND) provides a best-fit, experimentally determined universal value of the MOND acceleration parameter. The predictions of the modified gravity theories are compared to the predictions of cold non-baryonic dark matter (CDM), including a constant-density core-modified fitting formula, which produces excellent fits to galaxy rotation curves, including those of low surface brightness and dwarf galaxies. Upon analysing the mass profiles of clusters of galaxies inferred from X-ray luminosity measurements, from the smallest nearby clusters to the largest clusters of galaxies, it is shown that while MOG provides consistent fits, MOND does not fit the observed shape of cluster mass profiles for any value of the MOND acceleration parameter. Comparison to the predictions of CDM confirms that whereas the Navarro-Frenk-White (NFW) fitting formula does not fit the observed shape of galaxy cluster mass profiles, the core-modified dark matter fitting formula provides excellent best fits, supporting the hypothesis that baryons are dynamically important in the distribution of dark matter halos.
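The contrast at stake is between the cuspy NFW profile and a cored profile. The NFW form is standard; the core-modified parametrization shown next to it is the commonly quoted constant-density-core form and should be checked against the thesis itself:

```latex
\rho_{\rm NFW}(r) = \frac{\rho_s}{(r/r_s)\left(1+r/r_s\right)^{2}}
\;\xrightarrow{\;r\,\ll\, r_s\;}\; \rho \propto r^{-1},
\qquad
\rho_{\rm core}(r) = \frac{\rho_0\, r_c^{3}}{r^{3}+r_c^{3}}
\;\xrightarrow{\;r\,\ll\, r_c\;}\; \rho \to \rho_0 .
```

Rotation curves of low surface brightness and dwarf galaxies probe exactly the small-r regime where these two behaviours diverge, which is why they discriminate between the fitting formulas.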
94

Lorentz-violating dark matter

Mondragon, Antonio Richard 15 May 2009
Observations from the 1930s until the present have established the existence of dark matter with an abundance that is much larger than that of luminous matter. Because none of the known particles of nature have the correct properties to be identified as the dark matter, various exotic candidates have been proposed, the neutralino of supersymmetric theories being the most promising example. Such cold dark matter candidates, however, lead to a conflict between the standard simulations of the evolution of cosmic structure and observations: simulations predict excessive structure formation on small scales, including density cusps at the centers of galaxies, that is not observed. This conflict still persists in early 2007, and it has not yet been convincingly resolved by attempted explanations that invoke astrophysical phenomena to destroy or broaden all small-scale structure. We have investigated another candidate that is perhaps more exotic: Lorentz-violating dark matter, originally motivated by an unconventional fundamental theory, but defined in this dissertation as matter which has a nonzero minimum velocity. The present investigation evolved into the broader goal of exploring the properties of Lorentz-violating matter and its astrophysical consequences, a subject which to our knowledge has not been previously studied. Our preliminary investigations indicated that this form of matter might have less tendency to form small-scale structure; those calculations established that Lorentz-violating matter which always moves at an appreciable fraction of the speed of light will bind less strongly. However, the much more thorough set of studies reported here leads to the conclusion that, although the binding energy is reduced, the small-scale structure problem is not solved by Lorentz-violating dark matter. On the other hand, when we compare the predictions of Lorentz-violating dynamics with those of classical special relativity and general relativity, we find that differences might be observable in the orbital motions of galaxies in a cluster. For example, galaxies (which are composed almost entirely of dark matter) observed to have enlarged orbits about the cluster center of mass may be an indication of Lorentz violation.
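The weakened binding has a simple Newtonian gloss (ours, not the dissertation's formalism): a particle that can never move slower than v_min stays bound to a mass M at radius r only if v_min is below the local escape speed,

```latex
v_{\min} < v_{\rm esc}(r) = \sqrt{\frac{2GM}{r}}
\quad\Longleftrightarrow\quad
r < \frac{2GM}{v_{\min}^{2}} ,
```

so low-mass, small-scale halos are the first to lose the ability to retain such particles, which is why reduced binding was a plausible route to suppressing small-scale structure.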
95

Propriedades dinâmicas da matéria escura / Dynamical properties of the dark matter

Leandro José Beraldo e Silva 05 February 2015
This thesis studies dynamical and statistical aspects of dark matter in spherical mass distributions. Because its constituent particles interact gravitationally but not electromagnetically, so that their evolution is governed by long-range interactions, describing their properties in terms of statistical mechanics raises theoretical difficulties shared with self-gravitating systems in general. To better understand these properties, we study dark matter distributions with three different approaches. In the first, we use gravitational-lensing observations of galaxy clusters to compare the performance of several models proposed for the dark matter density profile. We divide these models into phenomenological and theoretical ones. All of the phenomenological models describe the observational data with comparable performance. Among the theoretical models studied, the one called DARKexp describes the data as well as the phenomenological ones. In a second approach, we use numerical simulation data to test a proposed function for the particle velocity distribution. This function incorporates the anisotropy of the velocity field into the so-called q-Gaussian distribution. We compare the performance of this function with that of the Gaussian and conclude that the former describes the data better, even taking into account the introduction of an extra parameter, although some discrepancies remain, especially in the inner regions of the halos. Finally, we discuss the possible relevance of the concept of indistinguishability in determining the equilibrium states of self-gravitating systems in general, proposing to associate this concept with the degree of mixing of the system. We implement this association in a combinatorial analysis and study the consequences for the determination of the distribution function and the density profile. This association also raises some doubts about the validity of the Vlasov equation during the process of violent relaxation.
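For reference, the isotropic q-Gaussian referred to here has the standard Tsallis form (the thesis builds velocity anisotropy on top of it):

```latex
f(v) \propto \left[\,1-(1-q)\,\frac{v^{2}}{2\sigma^{2}}\right]^{\frac{1}{1-q}},
\qquad
\lim_{q\to 1} f(v) \propto e^{-v^{2}/2\sigma^{2}},
```

so the single extra parameter q measures the departure from the Maxwellian case, and the model comparison must penalize it accordingly.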
96

Les sources responsables de la réionisation vues par MUSE / The sources responsible for reionization as seen by MUSE

Bina, David 12 December 2016
Significant effort has been devoted over the past two decades to understanding how structure formed in the early Universe. Today's observational capabilities make it possible to observe ever more distant galaxies, including those responsible for the cosmic reionization that occurred during the first billion years of the Universe. The main goal of this thesis was to constrain the nature and abundance of the sources responsible for cosmic reionization. More specifically, the study focused on star-forming galaxies with Lyman-alpha emission (LAEs) between z ~ 3 and 6.7. This thesis was carried out within the MUSE consortium; MUSE is a brand-new instrument installed at the VLT in January 2014, and we exploited its Guaranteed Time Observations (GTO) data. This work confirms the unrivalled power of MUSE for detecting and studying faint extragalactic sources without any preselection. We observed four lensing clusters, whose magnification of the incident light makes it possible to detect faint sources, at the expense of a reduced observed volume of the Universe. We first studied the galaxy cluster Abell 1689 in order to establish a methodology applicable to the other clusters. By comparing the volume density of the detected LAEs with luminosity functions (LFs) from the literature, we reached the following conclusion: the slope of the power-law part of the Schechter function is steep, with alpha <= -1.5, which means that the number of LAEs increases sharply towards faint luminosities. We then applied this method to the other clusters in our sample observed with MUSE. The LAEs identified and measured in these clusters are typically ten times fainter than those observed in blank fields, thanks to the lensing effect (39 < log(Lya) < 42.5). About one third of these LAEs have no counterpart in the continuum down to AB ~ 28 in the HST images and would therefore never have been seen in targeted surveys. The final catalogue contains more than 150 LAEs, which allowed us to study the contribution of the faintest objects as well as the evolution of the slope with redshift. The results seem to confirm that the slope alpha is close to -2 for the full LAE sample at 2.9 < z < 6.7. We also observe an evolution of alpha from -1.8 to -1.95 between z ~ 3-4 and z ~ 5-7, an original result that does not depend on the data used for the bright end of the LF. Integrating this LF then yields the density of ionizing photons emitted by these LAEs and their relative impact on cosmic reionization. In the future, the depth reached by James Webb Space Telescope (JWST) data will push the detection limit of these galaxies to z ~ 8. Near-infrared spectrographs such as MOSFIRE/Keck, KMOS/VLT and the very recent EMIR/GTC already allow z >= 7 candidates to be confirmed. This thesis has provided new constraints on the faint end of the LAE LF out to z ~ 6, a first step towards what will become possible in the coming years at redshifts of order z ~ 7-8.
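The luminosity function in question is the Schechter form, whose faint end is controlled by the slope alpha:

```latex
\phi(L)\,\mathrm{d}L = \phi^{*}\left(\frac{L}{L^{*}}\right)^{\alpha} e^{-L/L^{*}}\,\frac{\mathrm{d}L}{L^{*}} ,
\qquad
\int_{L_{\min}}^{\infty} L\,\phi(L)\,\mathrm{d}L = \phi^{*}L^{*}\,\Gamma\!\left(\alpha+2,\,\tfrac{L_{\min}}{L^{*}}\right).
```

For alpha near -2 the luminosity-density integral is dominated by the faint end (and formally diverges as L_min goes to 0 once alpha <= -2), which is why the measured slope decides how much the faintest LAEs contribute to the ionizing photon budget.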
97

Triaxial galaxy clusters / Amas de galaxies triaxiaux

Bonamigo, Mario 22 September 2016
It is well established, both theoretically and observationally, that galaxy clusters are not spherical objects and are much better approximated as triaxial. This thesis focuses on the three-dimensional shape of galaxy clusters. The originality of the approach is to tackle the problem both theoretically and observationally. First, I measured the shape of dark matter haloes in the Millennium XXL and Sbarbine simulations, providing predictions for dark matter halo shape over five orders of magnitude in mass. I then developed an algorithm that fits lensing and X-ray data simultaneously in order to constrain a triaxial mass distribution. The algorithm is tested and characterized on mock data sets and is found to recover the input parameters. Finally, I present the X-ray analysis of the galaxy cluster Abell 1703, which will be combined with the existing lensing analysis in order to determine its shape.
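A common way to measure such shapes in simulations is to take axis ratios from the eigenvalues of the mass distribution tensor. The sketch below illustrates the idea; the thesis's exact estimator (for example iterative or radially weighted variants) may differ.

```python
import numpy as np

def halo_axis_ratios(pos, masses):
    """Axis ratios b/a and c/a from the mass distribution tensor.

    pos    : (N, 3) particle positions relative to the halo centre
    masses : (N,) particle masses
    """
    # M_ij = sum_k m_k x_ki x_kj / sum_k m_k
    M = (pos.T * masses) @ pos / masses.sum()
    eigvals = np.sort(np.linalg.eigvalsh(M))[::-1]   # a^2 >= b^2 >= c^2
    a, b, c = np.sqrt(eigvals)
    return b / a, c / a

# Mock triaxial "halo" with input axis ratios 0.8 and 0.6:
rng = np.random.default_rng(0)
pts = rng.normal(size=(100_000, 3)) * np.array([1.0, 0.8, 0.6])
print(halo_axis_ratios(pts, np.ones(len(pts))))      # ~ (0.8, 0.6)
```

The eigenvectors of the same tensor give the principal axes, so the same machinery also yields halo orientations.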
98

Identifying Gravitationally Lensed QSO Candidates with eROSITA

Brogan, Róisín O'Rourke January 2020
As of June 2020, the first all-sky X-ray survey with the eROSITA instrument aboard the Spektr-RG spacecraft has been completed. A high percentage of the 1.1 million objects included in the survey are expected to be active galactic nuclei (AGN). Such an extensive catalogue of X-ray sources offers a unique opportunity for large-scale observations of distinct classes of X-ray emitters. This report explores methods of refining the catalogue to include only candidates for lensed AGN. Among the known types of AGN, quasi-stellar objects (QSOs) are some of the most luminous, making them well suited to observation over large distances. This is particularly fitting for observing gravitationally lensed objects, since lensing requires large distances over which fainter objects could not be seen. An indication of strong gravitational lensing is several images of the same object in close proximity on the sky. To reduce the data to more likely candidates, counterparts within a given radius are sought in the second data release of Gaia, an optical survey with higher resolution than eROSITA. An algorithm is produced which removes likely stellar Gaia sources using their X-ray to optical flux ratios and astrometric parameters. Gaia sources with no neighbours within a second radius are then also removed, leaving a catalogue of potential multiply imaged QSOs. This automated script was applied to an eROSITA catalogue and the results compared with known lenses. The remaining sources were also checked visually using Pan-STARRS optical survey data. The results seem promising, although a great deal of further refinement through visual inspection is needed to find the most promising candidates for lensed QSOs. / Written under the joint supervision of Georg Lamer at the Leibniz Institute for Astrophysics in Potsdam. The presentation was held online at the Institute due to the COVID-19 pandemic.
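A minimal sketch of that selection logic is given below. It is illustrative only: the radii, the flux-ratio cut, and the column names are placeholders, not the values used in the report.

```python
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord

def lensed_qso_candidates(xray, gaia,
                          match_radius=15 * u.arcsec,   # placeholder value
                          pair_radius=10 * u.arcsec,    # placeholder value
                          min_xray_optical=0.1):        # placeholder cut
    """Return indices of Gaia sources that may be multiple images of one AGN.

    xray, gaia : dicts of numpy arrays with 'ra', 'dec' in degrees and a
    'flux' column (placeholder names for the real catalogue columns).
    """
    xcoord = SkyCoord(xray["ra"] * u.deg, xray["dec"] * u.deg)
    gcoord = SkyCoord(gaia["ra"] * u.deg, gaia["dec"] * u.deg)

    # 1. Optical counterparts of X-ray sources within the match radius.
    ix, ig, _, _ = gcoord.search_around_sky(xcoord, match_radius)

    # 2. Remove likely stars: stellar sources have low X-ray/optical ratios.
    keep = xray["flux"][ix] / gaia["flux"][ig] > min_xray_optical
    ig = np.unique(ig[keep])

    # 3. Keep only counterparts that have another counterpart nearby,
    #    as expected for multiple images of the same lensed QSO.
    cand = gcoord[ig]
    i1, i2, _, _ = cand.search_around_sky(cand, pair_radius)
    has_neighbour = np.unique(i1[i1 != i2])
    return ig[has_neighbour]
```

In the report the stellar cut also uses Gaia astrometry (parallax and proper motion), which would add further masks at step 2 of this sketch.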
99

Reconstruction libre de lentilles gravitationnelles de type galaxie-galaxie avec les machines à inférence récurentielle / Free-form reconstruction of galaxy-galaxy gravitational lenses with recurrent inference machines

Adam, Alexandre 12 1900
Galaxy-galaxy gravitational lensing occurs when the light from a background galaxy is bent by the gravitational field of a foreground galaxy, producing multiple images or even Einstein rings from the point of view of an observer on Earth. These phenomena make it possible to study in detail the morphology of the background galaxy, magnified by the lens, but also the mass density distribution of the lens and its environment, offering a unique probe of the dark matter in lensing galaxies. Traditional methods for analysing these systems require significant computing time (from hours to days), not counting the time spent by experts to converge the MCMC analyses required to obtain the parameters of interest. This problem is significant, considering that large sky surveys such as those of the Rubin and Euclid observatories are projected to discover hundreds of thousands of gravitational lenses. Moreover, the Extremely Large Telescope (ELT), using adaptive optics, and the James Webb Space Telescope will offer an unprecedented view of these systems, with a resolving power that will enable analyses such as the search for cold dark matter subhalos, long predicted by the standard cosmological model ΛCDM. The approximations traditionally made to simplify gravitational lens reconstruction will no longer be valid in that regime. In this thesis, I present work that addresses both problems. I present an optimization method based on recurrent inference machines (RIM) that reconstructs two pixelated maps, one for the background source and one for the mass density distribution of the foreground lensing galaxy. This free-form parametric representation has the potential to reconstruct a very broad class of gravitational lenses, including those with dark matter halos and subhalos, which we demonstrate using realistic density profiles from the cosmological hydrodynamic simulation IllustrisTNG. Our reconstructions reach an unprecedented level of realism and run in a fraction of the time required by a traditional analysis, a significant step towards a method that can meet the challenge of analysing so many complex and varied systems on a human timescale.
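The update rule behind a recurrent inference machine (Putzky & Welling 2017) is, generically,

```latex
x_{t+1} = x_t + g_\phi\!\big(x_t,\ \nabla_{x}\log p(y \mid x_t),\ s_t\big),
\qquad
s_{t+1} = h_\phi\!\big(x_t,\ \nabla_{x}\log p(y \mid x_t),\ s_t\big),
```

where y is the observed lensed image and, in this application, x would stack the two reconstructed maps (background source and lens mass density). The learned components g_φ and h_φ, with hidden state s_t, replace the hand-tuned steps of a classical optimizer, which is what makes the per-system runtime so short.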
100

Accélération du lentillage gravitationnel à plans multiples par apprentissage profond / Accelerating multi-plane gravitational lensing with deep learning

Wilson, Charles 04 1900
Le "modèle standard" actuel de la cosmologie est celui de ΛCDM, décrivant un Univers en expansion accélérée ainsi qu’une structure de matière sombre froide formée en halos, sur lesquels s’assemblent les galaxies. Malgré les nombreuses confirmations observationnelles de ses prédictions, il existe d’importantes tensions entre les mesures de la distribution de structure sombre aux petites échelles de l’Univers et ce qui serait attendu de ΛCDM. Cependant, ces halos légers de matière sombre, qui sont prédit d’abonder à travers le cosmos, n’hébergent pas de galaxies lumineuses et sont donc très difficiles à observer directement. Leur présence peut toutefois être détectée dans les lentilles gravitationnelles fortes de type galaxie-galaxie, un phénomène se produisant lorsque la lumière d’une galaxie d’arrière-plan est fortement déviée par le champ gravitationnel d’une galaxie d’avantplan, formant des images multiples et des arcs étendus. Les halos distribués en ligne de visée de tels systèmes, ainsi que ceux imbriqués dans la galaxie lentille, peuvent causer des perturbations gravitationnelles dans les images de galaxies lentillées. La détection de ces effets infimes dans des observations de lentilles gravitationnelles est faite par des méthodes statistiques Bayésiennes, qui nécéssitent des centaines de milliers de simulations de la contribution de ces perturbateurs à la déflexion de la lumière. Traditionnellement, la modélisation du lentillage par les halos en ligne de visée s’est faite avec le formalisme du lentillage à plans multiples, qui souffre d’une nature récursive peu efficace. De plus, il est prédit par le modèle ΛCDM que la majorité des systèmes de lentilles gravitationnelles comporteraient davantage de halos en ligne de visée que de sous-halos imbriqués dans la galaxie lentille, motivant une modélisation détaillée des effets de ligne de visée. Dans un contexte d’analyse Bayésienne, l’approche du lentillage à plans multiples représente une échelle de temps de plusieurs jours pour l’analyse d’un seul système. En considérant que des grands relevés du ciel comme ceux de l’Observatoire Vera Rubin et du télescope spatial Euclid sont projetés de découvrir des centaines de milliers de lentilles gravitationnelles, l’effort de contraindre la distribution de matière sombre aux petites échelles se voit confronté à ce qui pourrait être un insurmontable problème de temps de calcul. Dans ce mémoire, je présente le développement d’un nouveau formalisme de modélisation du lentillage gravitationnel par halos en ligne de visée accéléré par des réseaux de neurones, motivé par les lacunes du lentillage à plans multiples et l’importance scientifique de la modélisation de ces effets. Les architectures de ces réseaux, conçues dans le cadre de ce travail, sont basées sur le mécanisme d’attention, et peuvent être conditionnées sur des ensembles de modèles de halos en ligne de visée afin de produire les angles de déflexion leur étant associés. Ce formalisme offre la flexibilité requise pour remplacer celui du lentillage à plans multiples, laissant à l’usager la liberté de spécifier un modèle de lentille principale et étant compatible avec des grilles de pixels de taille quelconque. Notre formalisme permet d’accélérer la modélisation du lentillage de ligne de visée par presque deux ordres de grandeur lorsque comparé au lentillage à plans multiples, et promet d’atteindre une exactitude lui étant comparable dans des développements futurs. 
Il s’agit d’une contribution significative à l’étude de la matière sombre aux petites échelles, qui permettra soit de réconcilier ΛCDM et les observations, ou mènera à l’adoption d’un modèle cosmologique alternatif. / The current "standard model" of cosmology is that of ΛCDM, describing a Universe undergoing accelerated expansion with a structure of cold dark matter formed into halos, onto which are assembled galaxies. Despite the numerous observational confirmations of its predictions, there remains some important tensions between measures of the distribution of dark structure on small scales of the Universe and what would be expected from ΛCDM. However, these light dark matter halos, predicted to be adundant throughout the cosmos, are not hosts of luminous galaxies and are therefore very difficult to observe directly. Yet, their presence can still be detected in galaxy-galaxy type strong gravitational lenses, a phenomenon occuring when the light of a background galaxy is strongly deflected by the gravitational field of a foreground galaxy, forming multiple images and extended arcs. Halos distributed along the line-of-sight of such systems, as well as those nested within the lens galaxy, can introduce gravitational perturbations in images of lensed galaxies. The detection of such infinitesimal effects in strong lensing observations is made with methods relying on Bayesian statistics, which require hundreds of thousands of simulations of the contribution of these perturbers to the deflection of light. Traditionally, modeling the lensing from line-of-sight halos has been done with the multi-plane lensing framework, which suffers from its inefficient recursive nature. Morevoer, the ΛCDM model predicts that most gravitational lens systems would host a larger amount of line-of-sight halos than subhalos nested within the lens galaxy, motivating a detailed modeling of line-of-sight effects. In a Bayesian analysis context, the multi-plane lensing approach represents a timescale of multiple days for the analysis of a single system. Considering that large sky surveys such as those of the Vera Rubin Observatory and the Euclid space telescope are projected to discover hundreds of thousands of gravitational lenses, the effort of constraining the small-scale distribution of dark matter is confronted to what might seem like an insurmountable problem of computation time. In this thesis, I present the development of a new neural-network-accelerated framework for modeling the gravitational lensing by line-of-sight halos, motivated by the shortcomings of multiplane lensing and the scientific importance of modeling these effects. The architectures of these networks, conceived as part of this work, are based on the attention mechanism, and can be conditioned on sets of line-of-sight halo models in order to produce their associated deflection angles. This framework offers the flexibility required to replace that of multi-plane lensing, leaving up to the user the freedom to specify a main lens model and being compatible with pixel grids of any size. Our framework allows to accelerate the modeling of line-of-sight lensing by nearly two orders of magnitude relative to multi-plane lensing, and promises to reach a comparable accuracy in future developments. This constitutes a significative contribution to the study of dark matter on small scales, which will either lead to the reconciliation of ΛCDM and observations, or the adoption of an alternate cosmological model.
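The recursion being replaced is the standard multi-plane lens equation (in one common convention): the angular position on plane j depends on the deflections accumulated on every earlier plane,

```latex
\boldsymbol{\theta}_{j} = \boldsymbol{\theta}_{1} - \sum_{i=1}^{j-1} \frac{D_{ij}}{D_{j}}\,\hat{\boldsymbol{\alpha}}_{i}\!\left(\boldsymbol{\theta}_{i}\right),
```

where D_j and D_ij are angular-diameter distances from the observer to plane j and between planes i and j, and α̂_i is the deflection field of plane i. Because each θ_j needs every previous θ_i, the cost grows with the number of planes and resists parallelization, which is the sequential bottleneck the attention-based network amortizes.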
