About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1

Inferring the photometric and size evolution of galaxies from image simulations / Inférence de l'évolution photométrique et en taille des galaxies au moyen d'images simulées

Carassou, Sébastien 20 October 2017 (has links)
Current constraints on the luminosity and size evolution of galaxies rely on catalogs extracted from multi-band imaging surveys. However, the resulting catalogs are altered by selection effects that are difficult to model and that can lead to conflicting results if not properly taken into account. In this thesis we have developed a new approach to infer robust constraints on galaxy evolution models. We use an empirical model to generate a distribution of mock galaxies from physical parameters. These galaxies are passed through an image simulator emulating the instrumental characteristics of any survey and are extracted in the same way as the observed data, allowing a direct comparison. The discrepancy between mock and observed data is minimized via a sampling process based on adaptive Markov chain Monte Carlo (MCMC) methods. Using mock data matching the properties of a Canada-France-Hawaii Telescope Legacy Survey Deep (CFHTLS Deep) field, we demonstrate the robustness and internal consistency of our approach by inferring the size and luminosity distributions and their evolution parameters for several realistic populations of galaxies. We compare our results with those obtained from the classical spectral energy distribution (SED) fitting method and find that our pipeline infers the model parameters using only 3 filters, and more accurately than SED fitting based on the same observables. We then apply our pipeline to a fraction of a real CFHTLS Deep field to constrain the same set of parameters in a way that is free from systematic biases. Finally, we highlight the potential of this technique in the context of future surveys and discuss its limitations.
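The inference loop this abstract describes can be sketched in a heavily simplified form: forward-simulate a mock catalog from a trial parameter, summarize it, and drive a Metropolis sampler with a distance-based pseudo-likelihood, as in ABC-flavoured MCMC. Everything here is invented for illustration (a one-parameter toy forward model, a fixed proposal step standing in for the thesis's adaptive MCMC, an arbitrary tolerance `eps`), not the actual pipeline.

```python
import math
import random

random.seed(0)

def simulate_summary(alpha, n=2000):
    """Toy forward model: draw mock galaxy 'magnitudes' whose spread depends
    on a single model parameter alpha, then reduce them to summary stats."""
    mags = [random.gauss(22.0, alpha) for _ in range(n)]
    mean = sum(mags) / n
    var = sum((m - mean) ** 2 for m in mags) / n
    return mean, var

# "Observed" summaries generated from a known truth alpha = 1.5.
obs = simulate_summary(1.5)

def distance(s1, s2):
    # Euclidean distance between summary-statistic vectors.
    return math.hypot(s1[0] - s2[0], s1[1] - s2[1])

def metropolis(n_steps=3000, step=0.1, eps=0.05):
    """Metropolis sampling of alpha, treating exp(-d^2 / (2 eps^2)) as a
    pseudo-likelihood; a new mock catalog is simulated at each proposal."""
    alpha = 1.0
    d = distance(simulate_summary(alpha), obs)
    chain = []
    for _ in range(n_steps):
        prop = alpha + random.gauss(0.0, step)
        if prop > 0:
            d_prop = distance(simulate_summary(prop), obs)
            log_ratio = (d * d - d_prop * d_prop) / (2 * eps ** 2)
            if math.log(random.random()) < log_ratio:
                alpha, d = prop, d_prop
        chain.append(alpha)
    return chain

chain = metropolis()
posterior_mean = sum(chain[1000:]) / len(chain[1000:])
```

Under this setup the chain, started away from the truth, drifts toward and then fluctuates around alpha ≈ 1.5, illustrating how comparing extracted summaries of mock and observed data can recover model parameters without an explicit likelihood.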
2

Lights in Dark Places: Inferring the Milky Way Mass Profile using Galactic Satellites and Hierarchical Bayes

Eadie, Gwendolyn 11 1900 (has links)
Despite valiant efforts by astronomers, the mass of the Milky Way (MW) Galaxy is poorly constrained, with estimates varying by a factor of two. A range of techniques have been developed and different types of data have been used to estimate the MW’s mass. One of the most promising and popular techniques is to use the velocity and position information of satellite objects orbiting the Galaxy to infer the gravitational potential, and thus the total mass. Using these satellites, or Galactic tracers, presents a number of challenges: 1) much of the tracer velocity data are incomplete (i.e. only line-of-sight velocities have been measured), 2) our position in the Galaxy complicates how we quantify measurement uncertainties of mass estimates, and 3) the amount of available tracer data at large distances, where the dark matter halo dominates, is small. The latter challenge will be alleviated by current and upcoming observational programs such as Gaia and the Large Synoptic Survey Telescope (LSST), but to properly prepare for these data sets we must overcome the former two. In this thesis work, we have created a hierarchical Bayesian framework to estimate the Galactic mass profile. The method includes incomplete and complete data simultaneously, and incorporates measurement uncertainties through a measurement model. The physical model relies on a distribution function for the tracers that allows the tracer and dark matter to have different spatial density profiles. When the hierarchical Bayesian model is confronted with the kinematic data from satellites, a posterior distribution is acquired and used to infer the mass and mass profile of the Galaxy. This thesis walks through the incremental steps that led to the development of the hierarchical Bayesian method, and presents MW mass estimates when the method is applied to the MW’s globular cluster population. Our best estimate of the MW’s virial mass is 0.87 (0.67, 1.09) x 10^12 solar masses.
We also present preliminary results from a blind test on hydrodynamical, cosmological computer-simulated MW-type galaxies from the McMaster Unbiased Galaxy Simulations. These results suggest our method may be able to reliably recover the virial mass of the Galaxy. / Thesis / Doctor of Philosophy (PhD)
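The hierarchical idea behind this framework can be illustrated with a deliberately tiny stand-in: tracers with latent true velocities, each observed with a known measurement error, and a single dispersion parameter inferred by MCMC. The latent velocities are marginalized analytically here (a collapsed version of the measurement model), a flat prior stands in for the thesis's physically motivated priors, and all numbers are invented; the actual work uses a distribution-function model over full mass profiles, not a single Gaussian dispersion.

```python
import math
import random

random.seed(1)

# Toy tracer data: a true line-of-sight dispersion sigma_true (km/s),
# with each tracer observed through its own known measurement error.
sigma_true = 120.0
errors = [random.uniform(5.0, 40.0) for _ in range(200)]
v_obs = [random.gauss(0.0, math.hypot(sigma_true, e)) for e in errors]

def log_post(sigma):
    """Log-posterior for the dispersion. The latent true velocities are
    marginalized analytically: v_obs ~ N(0, sigma^2 + err^2), so the
    measurement model enters through the per-tracer variance term."""
    if sigma <= 0:
        return -math.inf
    lp = 0.0
    for v, e in zip(v_obs, errors):
        s2 = sigma * sigma + e * e
        lp += -0.5 * (math.log(2 * math.pi * s2) + v * v / s2)
    return lp

# Random-walk Metropolis over sigma.
sigma, lp = 100.0, log_post(100.0)
chain = []
for _ in range(4000):
    prop = sigma + random.gauss(0.0, 5.0)
    lp_prop = log_post(prop)
    if math.log(random.random()) < lp_prop - lp:
        sigma, lp = prop, lp_prop
    chain.append(sigma)

post = chain[1000:]
est = sum(post) / len(post)
```

The posterior concentrates near the input dispersion, and because each tracer carries its own error term, poorly measured tracers are automatically down-weighted rather than discarded, which is the practical payoff of folding the measurement model into the inference.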
3

Estimação de funções do redshift de galáxias com base em dados fotométricos / Galaxies redshift function estimation using photometric data

Ferreira, Gretta Rossi 18 September 2017 (has links)
In a substantial number of astronomy problems, we are interested in estimating the value assumed, for various functions g, by some unknown quantity z ∈ ℜ based on covariates x ∈ ℜ^d. This is done using a sample (X1, Z1), ..., (Xn, Zn). The two approaches usually used to solve this problem consist in (1) estimating the regression of Z on x and plugging it into g, or (2) estimating the conditional density f(z | x) and plugging it into ∫ g(z) f(z | x) dz.
Unfortunately, few studies present quantitative comparisons of these two approaches. Moreover, few conditional density estimation methods have had their performance compared on these problems. In view of this, the objective of this work is to present several comparisons of techniques for estimating functions of an unknown quantity. In particular, we highlight nonparametric methods. In addition to estimators (1) and (2), we also propose a new approach that consists in directly estimating the regression function of g(Z) on x. These approaches were tested on different functions using the DEEP2 and Sheldon 2012 datasets. For almost all the functions tested, estimator (1) obtained the worst results, except when random forests were used. In several cases, the proposed new approach presented better results, as did estimator (2). In particular, we found that random forest methods generally led to good results.
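Why estimator (1) can lose to the direct approach is easy to see on a toy problem: when g is nonlinear, g(E[Z | x]) differs from the target E[g(Z) | x]. The sketch below uses an invented setup (Z | x ~ N(x, 1), g(z) = z², plain k-NN regression) rather than the thesis's datasets or estimators, and compares the plug-in route against directly regressing g(Z) on x.

```python
import random

random.seed(2)

# Toy data: Z | x ~ Normal(x, 1). We want f(x) = E[g(Z) | x] for the
# nonlinear functional g(z) = z^2, so the truth is x^2 + 1, while the
# plug-in g(E[Z | x]) is only x^2 -- biased by the conditional variance.
n = 4000
xs = [random.uniform(-2.0, 2.0) for _ in range(n)]
zs = [random.gauss(x, 1.0) for x in xs]
g = lambda z: z * z

def knn_mean(x0, xs, ys, k=100):
    """Plain k-nearest-neighbour regression estimate of E[Y | x0]."""
    idx = sorted(range(len(xs)), key=lambda i: abs(xs[i] - x0))[:k]
    return sum(ys[i] for i in idx) / k

grid = [-1.5, -0.5, 0.0, 0.5, 1.5]
truth = [x * x + 1.0 for x in grid]

# Approach (1): regress Z on x, then plug into g -> converges to x^2.
plug_in = [g(knn_mean(x, xs, zs)) for x in grid]
# Direct approach: regress g(Z) on x -> targets E[g(Z) | x] itself.
direct = [knn_mean(x, xs, [g(z) for z in zs]) for x in grid]

mse = lambda est: sum((a - b) ** 2 for a, b in zip(est, truth)) / len(grid)
mse_plug, mse_direct = mse(plug_in), mse(direct)
```

The plug-in estimator carries an irreducible bias of about 1 (the conditional variance of Z) at every grid point, while the direct regression is unbiased for the target, so its mean squared error is markedly lower; this is the gap the abstract's comparison is probing.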
5

Time Series Analysis of the A0 Supergiant HR 1040

Corliss, David J. 11 July 2013 (has links)
No description available.
6

Astrostatistics: Statistical Analysis of Solar Activity from 1939 to 2008

Yousef, Mohammed A. 10 April 2014 (has links)
No description available.
7

A Comparison of Flare Forecasting Methods. IV. Evaluating Consecutive-day Forecasting Patterns

Park, S.H., Leka, K.D., Kusano, K., Andries, J., Barnes, G., Bingham, S., Bloomfield, D.S., McCloskey, A.E., Delouille, V., Falconer, D., Gallagher, P.T., Georgoulis, M.K., Kubo, Y., Lee, K., Lee, S., Lobzin, V., Mun, J., Murray, S.A., Hamad Nageem, Tarek A.M., Qahwaji, Rami S.R., Sharpe, M., Steenburgh, R.A., Steward, G., Terkildsen, M. 21 March 2021 (has links)
A crucial challenge to successful flare prediction is forecasting periods that transition between "flare-quiet" and "flare-active." Building on earlier studies in this series in which we describe the methodology, details, and results of flare forecasting comparison efforts, we focus here on patterns of forecast outcomes (success and failure) over multiday periods. A novel analysis is developed to evaluate forecasting success in the context of catching the first event of flare-active periods and, conversely, correctly predicting declining flare activity. We demonstrate these evaluation methods graphically and quantitatively as they provide both quick comparative evaluations and options for detailed analysis. For the testing interval 2016-2017, we determine the relative frequency distribution of two-day dichotomous forecast outcomes for three different event histories (i.e., event/event, no-event/event, and event/no-event) and use it to highlight performance differences between forecasting methods. A trend is identified across all forecasting methods that a high/low forecast probability on day 1 remains high/low on day 2, even though flaring activity is transitioning. For M-class and larger flares, we find that explicitly including persistence or prior flare history in computing forecasts helps to improve overall forecast performance. It is also found that using magnetic/modern data leads to improvement in catching the first-event/first-no-event transitions. Finally, 15% of major (i.e., M-class or above) flare days over the testing interval were effectively missed due to a lack of observations from instruments away from the Earth-Sun line.
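The two-day outcome tally the abstract describes can be sketched as follows. This is a schematic re-implementation, not the paper's evaluation code: the function name, the 0.5 dichotomization threshold, and the tiny example series are all assumptions made for illustration.

```python
from collections import Counter

def two_day_patterns(forecasts, events, threshold=0.5):
    """Tally dichotomous forecast outcomes on day 2 of each consecutive-day
    pair, grouped by the pair's event history, mirroring the event/event,
    no-event/event and event/no-event transition categories."""
    tally = {"event/event": Counter(),
             "no-event/event": Counter(),
             "event/no-event": Counter(),
             "no-event/no-event": Counter()}
    for i in range(len(events) - 1):
        history = ("event" if events[i] else "no-event") + "/" + \
                  ("event" if events[i + 1] else "no-event")
        predicted = forecasts[i + 1] >= threshold
        outcome = "hit" if predicted == events[i + 1] else "miss"
        tally[history][outcome] += 1
    return tally

# Tiny worked example: the forecast probability lags the activity transition
# by a day, the persistence-like failure mode the abstract highlights.
events    = [False, False, True, True, False, False]
forecasts = [0.1,   0.1,   0.2,  0.8,  0.7,   0.2]
patterns = two_day_patterns(forecasts, events)
```

In this example the first event of the active period (the no-event/event pair) and the first quiet day after it (the event/no-event pair) are both missed, while the steady-state pairs are hit, which is exactly the transition-catching behaviour the frequency distribution is designed to expose.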
