1 |
The gravitationally lensed galaxy IRAS FSC10214+4724 - Deane, Roger Paul, January 2013 (has links)
We present a multi-wavelength analysis of IRAS FSC10214+4724 from radio to X-ray wavelengths. This is a gravitationally lensed galaxy at redshift z=2.3 (3 Gyr after the Big Bang) which hosts prodigious star formation as well as an obscured active nucleus. We derive a new lens model for the system, employing a Bayesian Markov Chain Monte Carlo algorithm with extended-source, forward ray-tracing. An array of spatially resolved maps (radio, millimetre, near-infrared, optical) traces different physical components, enabling a high-resolution, multi-wavelength view of a high-redshift galaxy beyond the capabilities of current telescopes. The spatially resolved molecular gas total-intensity and velocity maps reveal a reasonably ordered system; however, there is evidence for minor merger activity. We show evidence for an extended, low-excitation gas reservoir that either contains roughly half the total gas mass or has a different CO-to-H_2 conversion factor. Very Long Baseline Interferometry (VLBI) is used to detect what we argue to be the obscured active nucleus, with an effective spatial resolution of <50 pc at z=2.3. The source-plane inversion places the VLBI detection within milli-arcseconds of the modeled cusp caustic, resulting in a very large magnification (mu > 70) that is over an order of magnitude larger than the derived CO magnification. This implies an equivalent magnification difference between the starburst and AGN components, yielding significant distortion of the global continuum spectral energy distribution (SED). A primary result of this work is therefore the demonstration that emission regions of differing size and position within a galaxy can experience significantly different magnification factors (> 1 dex) and therefore distort our view of high-redshift, gravitationally lensed sources.
This not only urges caution against unsophisticated use of IRAS FSC10214+4724 as an archetypal high-redshift Ultra-Luminous Infrared Galaxy (ULIRG), but also against statistical deductions based on samples of strong lenses with poorly constrained lens models and spatially unresolved detections. Analogous to the continuum SED distortion quantified in this thesis, we predict a distortion of the CO spectral line energy distribution of IRAS FSC10214+4724: higher-order J lines, which are increasingly excited by the AGN and by shock heating from the central starburst, will be preferentially lensed owing to their smaller solid angles and closer proximity to the AGN, and therefore to the cusp of the caustic. This distortion is predicted to affect many high-redshift lenses and will be tested most effectively by the Jansky Very Large Array (JVLA) and the Atacama Large Millimetre/submillimetre Array (ALMA) working in synergy.
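The SED distortion described here follows from simple arithmetic: if the compact AGN and the extended starburst are magnified by very different factors, their observed flux ratio is biased by exactly that factor. A minimal sketch with hypothetical intrinsic fluxes and the magnifications quoted above (all numbers illustrative, not the thesis's measurements):

```python
import math

def observed_ratio(f_agn, f_sb, mu_agn, mu_sb):
    """AGN-to-starburst flux ratio after lensing boosts each component."""
    return (mu_agn * f_agn) / (mu_sb * f_sb)

# Hypothetical intrinsic fluxes (arbitrary units) and the magnifications
# discussed above: mu > 70 for the compact AGN, ~7 for the extended CO.
f_agn, f_sb = 1.0, 10.0
mu_agn, mu_sb = 70.0, 7.0

intrinsic = f_agn / f_sb
lensed = observed_ratio(f_agn, f_sb, mu_agn, mu_sb)
bias_dex = math.log10(lensed / intrinsic)   # the ~1 dex distortion of the SED
```

An order-of-magnitude magnification difference turns an intrinsically starburst-dominated SED into one that appears equally AGN- and starburst-powered.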
|
2 |
The evolution of dark and luminous structure in massive early-type galaxies - Oldham, Lindsay Joanna, January 2017 (has links)
In this thesis, I develop and combine strong lensing and dynamical probes of the mass of early-type galaxies (ETGs) in order to improve our understanding of their dark and luminous mass structure and evolution. Firstly, I demonstrate that the dark matter halo of our nearest brightest cluster galaxy (BCG), M87, is centrally cored relative to the predictions of dark-matter-only models, and suggest an interpretation of this result in terms of dynamical heating due to the infall of satellite galaxies. Conversely, I find that the haloes of a sample of 12 field ETGs are strongly cusped, consistent with adiabatic contraction models due to the initial infall of gas. I suggest an explanation for these differences in which the increased rate of merging and accretion experienced by ETGs in dense environments leads to increased amounts of halo heating and expansion, such that the signature of the halo's initial contraction is erased in BCGs but retained in more isolated systems. Secondly, I find evidence that the stellar mass-to-light ratio declines with increasing radius in both field and cluster ETGs. For M87, I show that the strength of this gradient cannot be explained by trends in stellar metallicity or age if the stellar initial mass function (IMF) is spatially uniform, but that an IMF which becomes increasingly bottom-heavy towards the galaxy centre can fully reproduce the inference on the stellar mass. Finally, I use the sizes, stellar masses and luminous structures of two samples of massive ETGs at redshift z ~ 0.6 to set constraints on the mechanisms of ETG growth. I find that ETGs in dense cluster environments already lie on the local size-mass relation at this redshift, unlike their isolated counterparts, and suggest that this may be evidence for their accelerated growth at early times due to the higher incidence of merger events in clusters.
I also show that massive compact ETGs at this redshift are composed of a compact, red, spheroidal core surrounded by a more extended, diffuse, bluer envelope, which may be a structural imprint of their ongoing inside-out growth. Overall, the studies presented in this thesis suggest a coherent scenario for ETG evolution which is dominated by hierarchical processes.
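The cored-versus-cusped contrast at the heart of this entry can be made concrete with a generalized NFW profile, in which an inner-slope parameter γ interpolates between the standard dark-matter-only cusp (γ = 1) and a heated core (γ → 0). A toy sketch with hypothetical normalization and scale radius, not the thesis's actual lensing-plus-dynamics machinery:

```python
import numpy as np

def gnfw_density(r, rho_s=1.0, r_s=30.0, gamma=1.0):
    """Generalized NFW profile; gamma = 1 is the standard cusp, gamma -> 0 a core.
    Normalization and scale radius (kpc) are hypothetical, for illustration."""
    x = r / r_s
    return rho_s / (x**gamma * (1.0 + x)**(3.0 - gamma))

# Inner logarithmic slope d(ln rho)/d(ln r) over 0.01-1 kpc
r = np.logspace(-2, 0, 50)
slope_cusp = np.gradient(np.log(gnfw_density(r, gamma=1.0)), np.log(r))[0]
slope_core = np.gradient(np.log(gnfw_density(r, gamma=0.2)), np.log(r))[0]
```

At small radii the logarithmic slope approaches -γ, which is the quantity the cluster-versus-field comparison effectively measures.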
|
3 |
A search for strong gravitational lenses in early-type galaxies using UKIDSS - Husnindriani, Prahesti, January 2015 (has links)
This work is focused on a search for strong gravitational lenses in early-type galaxies (ETGs). The sample comprises 4,706 galaxies spanning the magnitude range 15.0 < i < 18.0 and colour range 3.5 < (u-r) < 5.0. Two databases were employed: the UKIDSS Large Area Survey for K-band images and SDSS for g, r, i images. Each galaxy was fitted with a single Sérsic component, processed automatically with GALFIT (Peng et al. 2002; Peng et al. 2010) driven by a Python script (Appendix A). An initial classification yielded 259 galaxies that appear single in their K-band images. These galaxies were then reclassified based on image contouring in the g, r, i, and K filters, resulting in three categories: Sample A (99 galaxies), Sample B (96 galaxies), and Sample C (64 galaxies).
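The per-galaxy fitting step can be sketched in miniature. The thesis drives GALFIT from a Python script; the stand-in below instead fits a one-dimensional Sérsic profile with SciPy, using a common approximation for the b_n constant, on a noiseless mock profile with made-up parameters:

```python
import numpy as np
from scipy.optimize import curve_fit

def sersic(r, amp, r_eff, n):
    """Sersic surface-brightness profile; the b_n approximation
    (leading terms of the standard expansion) is adequate for n ~ 0.5-10."""
    b_n = 2.0 * n - 1.0 / 3.0
    return amp * np.exp(-b_n * ((r / r_eff) ** (1.0 / n) - 1.0))

# Noiseless mock radial profile with hypothetical parameters
# (the thesis fits 2D images with GALFIT; a 1D fit stands in here).
r = np.linspace(0.5, 30.0, 100)
profile = sersic(r, 10.0, 8.0, 4.0)

popt, _ = curve_fit(sersic, r, profile, p0=[5.0, 5.0, 2.0],
                    bounds=([0.1, 0.5, 0.3], [100.0, 50.0, 10.0]))
amp_fit, reff_fit, n_fit = popt
```

On clean data the fit recovers the input effective radius and Sérsic index; real survey cutouts additionally need a PSF model and sky background, which GALFIT handles.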
|
4 |
Supernovae seen through gravitational telescopes - Petrushevska, Tanja, January 2017 (has links)
Galaxies, and clusters of galaxies, can act as gravitational lenses and magnify the light of objects behind them. This effect enables observations of very distant supernovae that would otherwise be too faint to detect with existing telescopes, and allows studies of the frequency and properties of these rare phenomena when the universe was young. Under the right circumstances, multiple images of a lensed supernova can be observed, and because these objects are variable, the difference between the arrival times of the images can be measured. Since the images have taken different paths through space before reaching us, the time differences are sensitive to the expansion rate of the universe. One class of supernovae, Type Ia, is of particular interest. Their well-known brightness can be used to determine the magnification, which in turn constrains the lensing systems. In this thesis, galaxy clusters are used as gravitational telescopes to search for lensed supernovae at high redshift. Ground-based near-infrared and optical search campaigns are described for the massive clusters Abell 1689 and Abell 370, which are among the most powerful gravitational telescopes known. The searches resulted in the discovery of five photometrically classified core-collapse supernovae at redshifts 0.671 < z < 1.703 with significant magnification from the cluster. Owing to the power of the lensing cluster, volumetric core-collapse supernova rates for 0.4 ≤ z < 2.9 were calculated and found to be in good agreement with previous estimates and predictions from the cosmic star formation history. During the survey, two Type Ia supernovae in A1689 cluster members were also discovered, which allowed the Type Ia explosion rate in galaxy clusters to be estimated. Furthermore, the prospects of finding lensed supernovae at high redshift in simulated search campaigns with upcoming ground- and space-based telescopes are discussed.
Magnification by a galaxy lens also allows detailed studies of supernova properties at high redshift that would otherwise not be possible. Spectroscopic observations of lensed high-redshift Type Ia supernovae are of special interest, since they can be used to test for evolution in the standard-candle nature of these objects: if systematic redshift-dependent properties were found, their utility for future surveys could be challenged. In this thesis it is shown that the strongly lensed and very distant Type Ia supernova PS1-10afx at z=1.4 does not deviate from the well-studied nearby and intermediate-redshift populations of normal Type Ia supernovae. In a separate study, the discovery of the first resolved multiply-imaged gravitationally lensed Type Ia supernova is also reported. (At the time of the doctoral defense, Papers 3 and 4 were unpublished manuscripts.)
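The standard-candle argument is compact: since the unlensed brightness of a Type Ia supernova is known, any excess flux directly gives the magnification. A toy sketch with invented magnitudes, ignoring K-corrections, extinction, and measurement errors:

```python
def magnification_from_standard_candle(m_observed, m_unlensed):
    """Magnification implied by a Type Ia supernova appearing brighter than
    its known standard-candle brightness: mu = 10**(0.4 * (m_unlensed - m_obs)).
    Toy version: no K-corrections, extinction, or measurement errors."""
    return 10.0 ** (0.4 * (m_unlensed - m_observed))

# Hypothetical example: a lensed SN Ia observed 1.5 mag brighter than expected.
mu = magnification_from_standard_candle(23.5, 25.0)   # ~4x magnification
```

This is the sense in which a lensed Type Ia can be "used to understand the lensing system": the inferred mu is an independent check on the cluster lens model.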
|
5 |
Measuring angular diameter distances in the universe by Baryon Acoustic Oscillation and strong gravitational lensing - Jee, Inh, August 2013 (has links)
We discuss two ways of measuring angular diameter distances in the Universe: (i) Baryon Acoustic Oscillation (BAO), and (ii) strong gravitational lensing. For (i), we study the effects of survey geometry and selection functions on the two-point correlation function of Lyman-alpha emitters at 1.9 < z < 3.5 for the Hobby-Eberly Telescope Dark Energy Experiment (HETDEX). We develop a method to extract the BAO scale, and hence the volume-averaged distance D_V = [cz(1+z)^2 D_A^2 H^{-1}]^{1/3}, a combination of the angular diameter distance D_A and the Hubble expansion rate H, from a spherically averaged one-dimensional correlation function, and we quantify the statistical errors on such measurements. Using log-normal realizations of the HETDEX dataset, we show that we can determine D_V from HETDEX to 2% accuracy using the two-point correlation function. This study is complementary to the ongoing effort to characterize the power spectrum from HETDEX. For (ii), a previous study (Paraficz and Hjorth 2009) considered a spherical lens with a singular isothermal matter distribution and an isotropic velocity distribution, and found that combining measurements of the Einstein ring radius with the time delay of a strong lens system leads directly to a measurement of the angular diameter distance, D_A. Since this is a very new method, it requires careful investigation of various real-world effects, such as a realistic matter density profile, an anisotropic velocity distribution, and external convergence. In more realistic lens configurations we find that the velocity dispersion is the dominant source of uncertainty; for this method to achieve competitive precision on D_A, we need to constrain the velocity dispersion down to the percent level. External convergence and velocity-dispersion anisotropy, on the other hand, have a negligible effect on our result.
However, the dominant source of uncertainty depends largely on the image configuration of the system, which leads us to conclude that studying the angular dependence of the lens mass distribution is a necessary component.
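The volume-averaged distance defined above can be evaluated numerically in a flat ΛCDM cosmology. A self-contained sketch with hypothetical parameter values (H0 = 70 km/s/Mpc, Ωm = 0.3) and plain quadrature, not any survey pipeline:

```python
import numpy as np
from scipy.integrate import quad

C = 299792.458            # speed of light, km/s
H0, OM = 70.0, 0.3        # hypothetical flat-LCDM parameters

def hubble(z):
    """Expansion rate H(z) in km/s/Mpc."""
    return H0 * np.sqrt(OM * (1.0 + z) ** 3 + 1.0 - OM)

def angular_diameter_distance(z):
    """D_A in Mpc: comoving distance integral divided by (1+z)."""
    dc, _ = quad(lambda zp: C / hubble(zp), 0.0, z)
    return dc / (1.0 + z)

def dv(z):
    """Volume-averaged BAO distance D_V = [c z (1+z)^2 D_A^2 / H(z)]^(1/3)."""
    da = angular_diameter_distance(z)
    return (C * z * (1.0 + z) ** 2 * da ** 2 / hubble(z)) ** (1.0 / 3.0)

d_v = dv(2.5)   # a redshift inside the HETDEX Lyman-alpha range
```

Because the spherically averaged correlation function mixes transverse (D_A) and radial (1/H) separations, it is this particular combination, rather than D_A or H alone, that the isotropic BAO scale constrains.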
|
6 |
Modélisation précise d'amas de galaxies massifs observés par Hubble et MUSE / Precise modeling of massive galaxy clusters observed by Hubble and MUSE - Mahler, Guillaume, 09 October 2017 (has links)
Galaxy clusters are large, massive structures containing more than 80% dark matter. In a cluster core, the mass density can exceed the critical threshold at which the curvature of space-time bends light paths strongly enough for multiple images of the same background source to appear in the observer's field of view. Thanks to deep photometric coverage of Abell 2744, many multiply-imaged systems have been discovered. Identifying them nevertheless remains a challenge, so I developed a robust method, based on photometric properties preserved by lensing, to find them automatically. Measuring a redshift for each multiple image remains the surest way to associate them, however. The deep coverage of the integral-field spectrograph MUSE allowed me to identify a large number of sources (514), 83 of which are multiple images. With this extensive spectroscopic coverage, I built one of the most tightly constrained parametric mass models of a lensing cluster to date. The sensitivity of this model makes it possible to probe the influence of substructures in the outskirts (out to 700 kpc), revealing systematic uncertainties related to the parametrisation of the mass model (6%). Compared to previous studies, I find a 10% lower mass in the centre (within 100 kpc), illustrating one of the benefits of large spectroscopic constraint sets. This benefit is smaller for the amplification estimate, but it reveals significant discrepancies between the different mass components of the models, in places exceeding twice the statistical uncertainty.
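The "critical threshold" invoked above is the critical surface mass density for lensing, Σ_cr = c² D_s / (4πG D_l D_ls); where the projected cluster density exceeds it, multiple images can form. A sketch with hypothetical angular-diameter distances, roughly appropriate for a low-redshift cluster lensing a z ~ 2 source:

```python
import math

G = 4.30091e-9   # gravitational constant in Mpc (km/s)^2 / Msun
C = 299792.458   # speed of light, km/s

def sigma_crit(d_l, d_s, d_ls):
    """Critical surface density in Msun/Mpc^2, given angular-diameter
    distances (Mpc) to the lens, the source, and between them."""
    return C ** 2 * d_s / (4.0 * math.pi * G * d_l * d_ls)

# Hypothetical distances for a z~0.3 cluster and a z~2 background source.
sigma_pc2 = sigma_crit(950.0, 1750.0, 1450.0) / 1e12   # Msun per pc^2
```

The result, a few thousand solar masses per square parsec, is the density scale a cluster core must reach in projection before strong lensing occurs.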
|
7 |
Accélération du lentillage gravitationnel à plans multiples par apprentissage profond / Accelerating multi-plane gravitational lensing with deep learning - Wilson, Charles 04 1900 (has links)
The current "standard model" of cosmology is ΛCDM, describing a Universe undergoing accelerated expansion, with cold dark matter structure formed into halos onto which galaxies assemble. Despite the numerous observational confirmations of its predictions, important tensions remain between measurements of the distribution of dark structure on small scales of the Universe and what would be expected from ΛCDM. However, the light dark matter halos predicted to be abundant throughout the cosmos do not host luminous galaxies and are therefore very difficult to observe directly. Their presence can still be detected in galaxy-galaxy strong gravitational lenses, a phenomenon occurring when the light of a background galaxy is strongly deflected by the gravitational field of a foreground galaxy, forming multiple images and extended arcs. Halos distributed along the line of sight of such systems, as well as those nested within the lens galaxy, can introduce gravitational perturbations in the images of lensed galaxies. Detecting these minute effects in strong-lensing observations relies on Bayesian statistical methods, which require hundreds of thousands of simulations of the contribution of these perturbers to the deflection of light. Traditionally, lensing by line-of-sight halos has been modeled with the multi-plane lensing formalism, which suffers from an inefficient recursive nature. Moreover, the ΛCDM model predicts that most gravitational lens systems host more line-of-sight halos than subhalos nested within the lens galaxy, motivating a detailed modeling of line-of-sight effects. In a Bayesian analysis context, the multi-plane lensing approach implies a timescale of several days for the analysis of a single system. Considering that large sky surveys such as those of the Vera Rubin Observatory and the Euclid space telescope are projected to discover hundreds of thousands of gravitational lenses, the effort to constrain the small-scale distribution of dark matter faces what could be an insurmountable computation-time problem.

In this thesis, I present the development of a new neural-network-accelerated framework for modeling gravitational lensing by line-of-sight halos, motivated by the shortcomings of multi-plane lensing and the scientific importance of modeling these effects. The network architectures, designed as part of this work, are based on the attention mechanism and can be conditioned on sets of line-of-sight halo models to produce the associated deflection angles. This framework offers the flexibility required to replace multi-plane lensing, leaving the user free to specify a main-lens model, and is compatible with pixel grids of any size. It accelerates the modeling of line-of-sight lensing by nearly two orders of magnitude relative to multi-plane lensing, and promises to reach comparable accuracy in future developments. This is a significant contribution to the study of dark matter on small scales, which will either help reconcile ΛCDM with observations or lead to the adoption of an alternative cosmological model.
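The recursive nature of multi-plane lensing that this work sets out to accelerate can be illustrated in one dimension: to know where a ray lands on plane j, one must first know where it crossed every earlier plane. A toy sketch with point-mass deflectors and invented distance-ratio couplings (not the thesis's actual formalism or code):

```python
def deflection(theta, center, strength):
    """Point-mass reduced deflection in 1D: alpha = theta_E^2 / (theta - center)."""
    return strength ** 2 / (theta - center)

def trace_to_source(theta, planes, beta):
    """Recursively propagate an image-plane angle through each lens plane.

    planes: list of (center, strength) per lens plane, ordered by distance;
    beta[i][j]: distance-ratio coupling of plane i to plane j, where
    j == len(planes) denotes the source plane. The recursion is the costly
    part: plane j needs the ray position on every earlier plane.
    """
    thetas = [theta]
    for j in range(1, len(planes) + 1):
        t = theta - sum(beta[i][j] * deflection(thetas[i], *planes[i])
                        for i in range(j))
        thetas.append(t)
    return thetas[-1]

# Invented configuration: a main lens plus one line-of-sight halo.
planes = [(0.3, 1.0), (-0.5, 0.4)]
beta = {0: {1: 0.4, 2: 1.0}, 1: {2: 1.0}}
src = trace_to_source(1.7, planes, beta)
```

Because each plane depends on all previous ones, the cost grows with the number of halos and cannot be batched away, which is precisely what motivates replacing the recursion with a single conditioned network evaluation.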
|
8 |
Estimateur neuronal de ratio pour l'inférence de la constante de Hubble à partir de lentilles gravitationnelles fortes / Neural ratio estimation for inferring the Hubble constant from strong gravitational lenses - Campeau-Poirier, Ève 12 1900 (has links)
The two main methods for measuring the Hubble constant, the current expansion rate of the Universe, find different values. One relies heavily on the currently accepted cosmological model describing the cosmos; the other is a direct measurement. The disagreement thus arouses suspicion that new physics may exist outside this model. If another method, independent of the two in conflict, supported one of the two values, it would guide cosmologists' efforts to resolve the tension.

Strong gravitational lensing is among the candidate methods. This phenomenon occurs when a light source aligns with a massive object along a telescope's line of sight. Crossing the curved space-time in the vicinity of the mass, the light deviates from its trajectory along several paths, producing distorted, magnified images. For a point-like light source, two or four images stand out clearly. If the source is also variable, each of its fluctuations appears at a different moment in each image, because each path has a different length. The time delays between the image signals depend intimately on the Hubble constant.

This approach faces many challenges. First, it takes specialists several days to run the Markov Chain Monte Carlo (MCMC) analysis that evaluates the parameters of a single lens system. With thousands of lens systems forecast to be detected by the Rubin Observatory in the coming years, this approach is untenable. It also introduces simplifications that risk biasing the inference, which conflicts with the objective of shedding light on the discrepancy between the Hubble constant measurements.

This thesis presents a simulation-based inference strategy to address these issues. Several previous studies have accelerated lens modeling with machine learning. Our approach complements their efforts by training a neural ratio estimator to determine the distribution of the Hubble constant from lens-modeling products and time-delay measurements. The neural ratio estimator runs quickly and, on simple simulations, obtains results that agree with those of the traditional analysis, have acceptable statistical consistency, and are unbiased.
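The statement that time delays "depend intimately on the Hubble constant" can be made quantitative: delays are proportional to the time-delay distance D_Δt = (1+z_l) D_l D_s / D_ls, and in a given cosmology every distance scales as 1/H0. A self-contained sketch in flat ΛCDM with hypothetical redshifts and parameters (no neural network involved; simple trapezoidal quadrature):

```python
import math

C = 299792.458  # speed of light, km/s

def comoving_distance(z, h0, om=0.3, steps=2000):
    """Flat-LCDM comoving distance (Mpc) via trapezoidal integration."""
    total, dz = 0.0, z / steps
    for i in range(steps + 1):
        zi = i * dz
        w = 0.5 if i in (0, steps) else 1.0
        total += w / math.sqrt(om * (1.0 + zi) ** 3 + 1.0 - om)
    return C / h0 * total * dz

def time_delay_distance(z_l, z_s, h0):
    """D_dt = (1+z_l) D_l D_s / D_ls, the quantity time delays measure."""
    dl = comoving_distance(z_l, h0) / (1.0 + z_l)
    ds = comoving_distance(z_s, h0) / (1.0 + z_s)
    dls = (comoving_distance(z_s, h0) - comoving_distance(z_l, h0)) / (1.0 + z_s)
    return (1.0 + z_l) * dl * ds / dls

# Time delays scale as 1/H0: halving H0 doubles the predicted delay.
d70 = time_delay_distance(0.5, 2.0, 70.0)
d35 = time_delay_distance(0.5, 2.0, 35.0)
```

Inverting this relation, a measured delay plus a lens-mass model yields D_Δt and hence H0, which is the inference target of the neural ratio estimator.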
|