Optimized Dark Matter Searches in Deep Observations of Segue 1 with MAGIC

Aleksic, Jelena 26 June 2013 (has links)
There is an impressive amount of evidence, on all scales, favouring the existence of dark matter: an invisible, non-baryonic component of the Universe that accounts for almost 85% of its total mass density. Although its existence was first postulated more than 80 years ago, the nature of dark matter remains a mystery. Finding and understanding the answer to this question is one of the most important and exciting tasks of modern science. In the context of our current cosmological view of the Universe, dark matter is considered to be a new type of massive particle that interacts weakly with ordinary matter and radiation. In addition, this new particle is most likely cold, non-baryonic, produced thermally in the early Universe, and stable on cosmological scales. The search for the dark matter particle is carried out in parallel through three different approaches: detection of dark matter produced in colliders, direct detection of dark matter scattering off ordinary matter in underground experiments, and indirect searches with space- and ground-based observatories for Standard Model particles created in dark matter annihilation or decay. This last strategy is the subject of this Thesis. The results presented here are from indirect searches for dark matter in the dwarf spheroidal galaxy Segue 1, carried out with the MAGIC Imaging Air Cherenkov Telescopes. The objective is to recognize highly energetic photons, produced in the annihilation or decay of dark matter particles, by spectral features unique to gamma rays of dark matter origin. A dedicated analysis approach, called the full likelihood method, has been developed to optimize the sensitivity of the analysis to such dark matter signatures.
The outline of the Thesis can be summarized as follows:
• Chapter 1 introduces the dark matter paradigm: the astrophysical and cosmological evidence supporting the existence of dark matter, and how it can be reconciled with our current picture of the evolution of the Universe. The Chapter ends with a review of some of the best-motivated dark matter particle candidates, with a detailed discussion of those of particular interest for this work.
• Chapter 2 is devoted to dark matter searches. It begins by presenting the different strategies currently employed by various experiments and their most noteworthy results, and continues with a more detailed description of indirect searches. Special attention is devoted to highly energetic photons as search messengers: what signal should be expected, where to look for it, and with which instruments.
• Chapter 3 introduces this work's tool for dark matter searches: the MAGIC Telescopes. The Chapter is divided into two parts, one describing the technical properties of the system and the other characterizing its standard analysis chain.
• Chapter 4 presents the original scientific contribution of this work: the development of the full likelihood approach, an analysis method optimized for recognizing the spectral features expected from photons of dark matter origin. First, the method is formally introduced; it is then characterized for predefined sets of conditions, and its performance is evaluated for particular spectral shapes.
• Chapter 5 presents the results of this work. First, the motivation for selecting the Segue 1 galaxy as the optimal candidate for dark matter searches with MAGIC is presented. Then, details of the observations and the data reduction are summarized. This is followed by the full likelihood analysis of the data. Finally, the Chapter ends with the constraints obtained in this work for different models of dark matter annihilation and decay.
A brief summary of the most relevant points of this Thesis is presented in the Conclusions.
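As a rough illustration of the idea behind a full (unbinned) likelihood analysis, the sketch below fits an extended likelihood for a signal of known spectral shape on top of a power-law background. Everything here — the energy range, the background index, the Gaussian "line" feature — is an invented toy, not MAGIC's instrument response or the method as implemented in the thesis:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(42)

# Toy analysis energy range in TeV (illustrative assumptions only).
E_MIN, E_MAX = 0.1, 10.0
GAMMA = 2.7  # background power-law index

def f_bkg(E):
    """Background PDF ~ E^-GAMMA, normalized on [E_MIN, E_MAX]."""
    norm = (E_MIN**(1 - GAMMA) - E_MAX**(1 - GAMMA)) / (GAMMA - 1)
    return E**(-GAMMA) / norm

def f_sig(E):
    """Signal PDF: a narrow line at 1 TeV, standing in for a dark-matter feature."""
    mu, sigma = 1.0, 0.05
    return np.exp(-0.5 * ((E - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def neg_log_like(s, energies, b):
    """Extended unbinned negative log-likelihood (constant terms dropped):
    -lnL = (s + b) - sum_i ln[s * f_sig(E_i) + b * f_bkg(E_i)]."""
    return (s + b) - np.sum(np.log(s * f_sig(energies) + b * f_bkg(energies)))

# Draw background-only events from the power law by inverse-CDF sampling.
n_bkg = 500
u = rng.random(n_bkg)
a, c = E_MIN**(1 - GAMMA), E_MAX**(1 - GAMMA)
events = (a - u * (a - c)) ** (1.0 / (1 - GAMMA))

# Fit the signal strength; with no injected signal, s_hat should come out small.
fit = minimize_scalar(neg_log_like, args=(events, n_bkg),
                      bounds=(0.0, 100.0), method="bounded")
s_hat = fit.x
```

The point of working with event-wise energies rather than binned counts is exactly what the abstract describes: the characteristic spectral shape of the expected signal enters the likelihood directly, which improves sensitivity to sharp dark-matter features.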

Study of the Gamma Ray Horizon with MAGIC as a new method to perform cosmological measurements

Blanch Bigas, Oscar 05 November 2004 (has links)
Precise measurements of the extrinsic cutoff, due to the absorption of gamma rays in the diffuse extragalactic background, in Active Galactic Nuclei distributed over a large redshift range lead to a new technique for measuring the cosmological densities. This new technique for the determination of the cosmological parameters has the following features:
- It is independent of, and behaves differently from, other techniques currently in use.
- It does not rely on the existence of a time-independent standard candle, as the Supernovae Ia measurements do, although it relies on the existence of a cosmological ultraviolet-to-infrared extragalactic background light which, to first approximation, is assumed to be uniform and isotropic on cosmological scales.
- It uses Active Galactic Nuclei as sources, and may therefore allow the study of the expansion of our universe up to the highest observable redshifts.
A realistic case, using already known sources as well as the characteristics of the MAGIC Telescope obtained from Monte Carlo simulations, has been shown to provide, at the statistical level, a determination of the matter density versus the dark energy density that is at the level of the best present results from the combined Supernovae Ia observations. The flux of the well-established extragalactic TeV emitters and of the best EGRET candidates to emit in the energy range MAGIC will cover has been extrapolated. This allows estimating the precision of the Gamma Ray Horizon determination, which will be at the 3-5 per cent level with observation times of about 50 hours. The considered sources cover a large redshift range (0.031 to 1.8), therefore providing a good mapping of the gamma ray horizon as a function of redshift. To measure the cosmological parameters, a multi-parameter fit of the cosmological parameters to the gamma ray horizon as a function of redshift has been performed. The results could improve drastically with the expected discovery of a plethora of new sources. A first estimate of the main systematic uncertainties of experimental and theoretical origin has also been presented. The main experimental systematic is estimated to be the global energy scale of the Cherenkov Telescopes, which has a very modest effect on the cosmological parameter fits. The main theoretical systematic comes from the assumed ultraviolet-to-infrared Extragalactic Background Light and its redshift evolution, which limits the ability to measure cosmological parameters. Given the current knowledge of the Extragalactic Background Light, these systematics dominate and therefore have to be treated carefully.
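The attenuation picture behind the Gamma Ray Horizon can be sketched in a few lines. The optical-depth parametrization below is a deliberately crude stand-in for a real EBL absorption model (the exponent and normalization are invented), but it shows how the horizon energy, defined by the condition τ(E, z) = 1, falls with redshift:

```python
import numpy as np
from scipy.optimize import brentq

# Toy optical depth tau(E, z) = z * (E / E_STAR)**ALPHA. Purely illustrative:
# a real calculation integrates pair production over an EBL photon density.
E_STAR = 1.0   # TeV
ALPHA = 1.5

def tau(E, z):
    """Toy gamma-gamma optical depth for photon energy E (TeV) at redshift z."""
    return z * (E / E_STAR) ** ALPHA

def observed_flux(E, z, intrinsic):
    """Extrinsic cutoff: the intrinsic AGN spectrum attenuated by exp(-tau)."""
    return intrinsic(E) * np.exp(-tau(E, z))

def gamma_ray_horizon(z):
    """Gamma Ray Horizon: the energy (TeV) at which tau(E, z) = 1."""
    return brentq(lambda E: tau(E, z) - 1.0, 1e-4, 1e4)
```

Mapping `gamma_ray_horizon` over the sources' redshift range (0.031 to 1.8) and fitting cosmological parameters to that curve is, schematically, the multi-parameter fit described above; in the real analysis the redshift dependence of τ carries the cosmological information.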

Implementing the Gaia Astrometric Solution

O'Mullane, William 23 March 2012 (has links)
As is the way with books in general, this document is presented as chapters (seven in number) devoted to individual topics within the overall subject of Gaia astrometric data processing. We progress logically from the satellite, to the equations for the astrometry, to the implementation of a software system to process Gaia observations. After this we look at a few key astrophysical issues for Gaia and describe tests, carried out using the implementation, concerning these effects. A few appendices provide additional information. An overview of each chapter follows. Section 1, this introductory chapter, provides an overview of the work as well as of the satellite hardware for the reader unfamiliar with Gaia. In Section 2 the equations underpinning the astrometric solution are explained and developed toward the algorithms actually coded in the system. Section 3 provides details of the Java software framework which hosts the equations previously described. The framework itself has been tuned to process Gaia observations efficiently and is the main original content of the thesis. This system is known as the Astrometric Global Iterative Solution, or AGIS. Having looked at the implementation, a few of the astrophysical effects and design decisions which influence the solution are described in Section 4. Although no data has yet been received from Gaia, extensive simulations have been performed in the Gaia community. Some of the AGIS tests relating to the astrophysical phenomena described in Section 4 are reported in Section 5; in this manner a demonstration of the effectiveness of AGIS is presented. A discussion and overview of the development approach adopted for AGIS is presented in Section 6. The eXtreme Programming approach is particularly suited to science development and worked well for this project in the form presented. Brief conclusions are drawn in Section 7. Appendix A provides a primer on quaternions, which are used for attitude modelling. A complete list of the mind-boggling acronyms used in this document appears in Appendix B. Finally, some published papers are included in Appendix C.
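Since the appendix covers quaternions for attitude modelling, a minimal sketch of how a unit quaternion rotates a direction vector may be useful. This is textbook quaternion algebra in the Hamilton convention, not code from AGIS:

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton product of two quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def from_axis_angle(axis, angle):
    """Unit quaternion for a rotation by `angle` (rad) about `axis`."""
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    return np.concatenate([[np.cos(angle / 2)], np.sin(angle / 2) * axis])

def rotate(q, v):
    """Rotate 3-vector v by unit quaternion q via q * (0, v) * q^-1."""
    qv = np.concatenate([[0.0], np.asarray(v, dtype=float)])
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    return quat_mul(quat_mul(q, qv), q_conj)[1:]
```

Four numbers plus one unit-norm constraint give a singularity-free attitude parametrization, which is why spacecraft attitude models generally prefer quaternions over Euler angles.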

New techniques for the analysis of the large scale structure of the Universe

Gil Marín, Héctor 03 May 2012 (has links)
Cosmology is the discipline that studies the Universe as a whole: its origin, evolution, structure, and ultimate fate, as well as the laws that govern it. Modern cosmology rests on the Big Bang theory, which brings together observational astronomy and particle physics. In cosmology, the term large scale structure refers to the characterization of the distribution of matter and radiation on scales typically larger than 10 Mpc (tens of millions of light years). Sky surveys have provided essential information about the contents and properties of this structure: on these scales the Universe follows a hierarchical organization of superclusters and filaments, with no evidence of continued structure on still larger scales, a phenomenon known as the End of Greatness.
The goal of this thesis is to study the large scale structure of the Universe from a theoretical point of view. In particular, the chapters of this thesis focus on developing statistical tools to improve our understanding of the contents of the Universe. In Chapter 1 a brief introduction to the basics of cosmology and the large scale structure of the Universe is presented. This is the starting point of the thesis and provides vital background material for the work developed in the following Chapters. Chapter 2 concerns the development of an extension of the Halo Model. We study the possibility of modifying the standard halo model so that dark matter halo properties depend not only on the halo mass but also on the halo environment. Both theoretical and observational studies indicate that the properties of dark matter haloes, and especially the way they host galaxies, namely the Halo Occupation Distribution (HOD), depend not only on the mass of the host halo but also on its formation history. This formation-history dependence may be related to the dark matter field surrounding the halo. In this work we present a theoretical model that incorporates this extra environmental dependence in a simple way: the whole population of dark matter haloes is split in two, depending on whether the haloes live in high-density or low-density environments. We explore how the dark matter and galaxy correlation functions are affected by this environmental dependence through the dark matter halo profile and the HOD, respectively. In Chapter 3 we explore the possibility of improving the measurement of the growth factor using dark matter tracers. We compare the accuracy of the growth factor measurement using a single biased dark matter tracer and using two different tracers. We make use of realistic bias models, which include non-linear and stochastic parameters, and calibrate them using dark matter simulations, with haloes in a given mass bin as tracers. We expect that with this method the sample variance can be reduced and the accuracy of the measurements improved, as previous works have shown. Chapter 4 explores how possible deviations from General Relativity can be detected using the bispectrum. We work with a suite of cosmological simulations of modified gravitational action f(R) models, where cosmic acceleration is induced by a scalar field that acts as a fifth force on all forms of matter. The goal is to see how the bispectrum of the dark matter field on mildly non-linear scales is modified by the extra scalar field. In particular, we are interested in the effect on the bispectrum when different gravity models present the same power spectrum at late times. In Chapter 5 we propose a new, simple formula to compute the dark matter bispectrum in the moderately non-linear regime (k < 0.4 h/Mpc) and for redshifts z ≤ 1.5. Our method is inspired by the approach of Scoccimarro and Couchman (2001), but includes a modification of the original formulae and a prescription to better describe the BAO oscillations. Using ΛCDM simulations we fit the free parameters of our model, ending up with a simple analytic formula that accurately predicts the bispectrum for a ΛCDM Universe, including the effects of Baryon Acoustic Oscillations. The major conclusions of the work presented in this thesis are summarized and discussed in Chapter 6, together with possible future projects.
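The tree-level formula that Scoccimarro–Couchman-style fitting formulas build on can be written down compactly. The sketch below uses the standard second-order perturbation-theory kernel F2 with a toy power-law P(k); the spectrum and its amplitude are placeholders, not the calibrated fit from the thesis:

```python
import numpy as np

def F2(k1, k2, k3):
    """Second-order PT kernel evaluated on a closed triangle (k1, k2, k3).
    mu is the cosine of the angle between the k1 and k2 wavevectors."""
    mu = (k3**2 - k1**2 - k2**2) / (2.0 * k1 * k2)
    return 5.0/7.0 + 0.5 * mu * (k1/k2 + k2/k1) + (2.0/7.0) * mu**2

def P_lin(k):
    """Toy linear power spectrum ~ k^-2 -- illustrative amplitude and slope."""
    return 1.0e4 * k**(-2.0)

def bispectrum_tree(k1, k2, k3):
    """Tree-level matter bispectrum: B = 2 F2 P P + cyclic permutations."""
    return (2.0 * F2(k1, k2, k3) * P_lin(k1) * P_lin(k2)
          + 2.0 * F2(k2, k3, k1) * P_lin(k2) * P_lin(k3)
          + 2.0 * F2(k3, k1, k2) * P_lin(k3) * P_lin(k1))
```

Fitting formulas of this family typically keep this structure but replace F2 and P with effective, simulation-calibrated versions to extend the validity into the mildly non-linear regime, which is the kind of modification Chapter 5 pursues.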

Some observational and theoretical aspects of cosmic-ray diffusion

Cea del Pozo, Elsa de 22 July 2011 (has links)
This Thesis deals with certain aspects of cosmic-ray diffusion. It is divided in two parts: one describes phenomenological models of cosmic-ray diffusion, and the other presents observations taken with the MAGIC experiment and simulations of the future Cherenkov Telescope Array (CTA). In the first part, the generally accepted theory of cosmic-ray diffusion is introduced. Supernova remnants (SNRs) are believed to be among the most likely sites of cosmic-ray acceleration, through both hadronic and leptonic processes. The particle acceleration mechanism in each SNR is assumed to be diffusive shock acceleration (DSA). To obtain observational confirmation of proton and nuclei acceleration, and to distinguish it from leptonic emission, the effects of the multiple messengers produced by secondary particles must be isolated. Following this, a model of the neighborhood of the SNR IC443 is developed that explains its high-energy phenomenology: cosmic rays escape from the remnant, the most energetic ones reach the molecular cloud in front of it first, while the least energetic ones remain confined to the shell of the SNR. The results are confronted with the latest observations of this source, and the apparent displacement between the sources detected at high and very high energy is explained by this model. Moreover, a multi-frequency and multi-messenger model (i.e., photons across the whole electromagnetic spectrum, plus neutrinos) of the diffuse emission from the starburst galaxy M82 is presented. The gamma-ray predictions are compared with the subsequent detections, in the energy range between giga- and tera-electronvolts, of the starburst galaxies M82 and NGC 253 by the Fermi satellite and the ground-based experiments H.E.S.S. and VERITAS; the model explains these detections rather satisfactorily. In the second part of the Thesis, the technique for gamma-ray detection at ground level through Cherenkov radiation is described. This technique is exploited, among others, by the MAGIC experiment. Some of the observations taken by the student with this telescope facility are presented as part of this Thesis. First, the upper limits on the gamma-ray flux from two sources in the region of the SNR G65.1+0.6, observed with MAGIC-I, are shown. These two sources were previously detected by the Milagro experiment and are associated with two bright sources in the Fermi catalog. One possible explanation is that they are two pulsars powering the pulsar wind nebulae that surround them. Furthermore, preliminary results of stereo observations (using the two MAGIC telescopes) of the SNR IC443 are presented. The goal of these observations is an energy-dependent morphological study; so far the number of hours obtained is not sufficient, but new observations are planned for the near future. Finally, some simulations of the future CTA are presented for the first time, together with several spectral studies of interesting scientific cases. In particular, these studies focus on objects already discussed in this Thesis, such as the SNR IC443 and the starburst galaxies M82 and NGC 253, as well as molecular clouds illuminated by cosmic rays that escaped from nearby SNRs. The CTA observatory represents the future of ground-based gamma-ray observations and is expected to bring together the collaborations of all current telescope facilities. The energy range will be widened, the sensitivity improved by an order of magnitude, and the angular resolution enhanced with respect to existing experiments. The present Thesis is thus just the tip of the iceberg of what is yet to come.
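The energy-dependent escape picture described above — the most energetic cosmic rays reaching the molecular cloud first — follows directly from a diffusion coefficient that grows with energy. A minimal sketch, with illustrative Galactic-like numbers that are assumptions rather than values taken from the thesis model:

```python
import numpy as np

# Illustrative parameters (assumptions, not fitted values from the thesis):
D0 = 1.0e28       # diffusion coefficient at reference energy, cm^2/s
E0 = 10.0         # reference energy, GeV
DELTA = 0.5       # power-law index of the energy dependence

PC = 3.086e18     # centimetres per parsec
YR = 3.156e7      # seconds per year

def D(E_GeV):
    """Energy-dependent diffusion coefficient D(E) = D0 * (E / E0)**DELTA."""
    return D0 * (E_GeV / E0) ** DELTA

def arrival_time_yr(distance_pc, E_GeV):
    """Characteristic time for cosmic rays of energy E to diffuse a given
    distance, t = d^2 / (4 D(E)): more energetic particles arrive first."""
    d = distance_pc * PC
    return d**2 / (4.0 * D(E_GeV)) / YR
```

For a cloud a few tens of parsecs from the remnant, this simple scaling already reproduces the qualitative effect invoked for IC443: TeV particles reach the cloud while GeV particles are still confined near the shell, displacing the high-energy emission relative to the very-high-energy emission.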

Search for gamma-ray emission from supernova remnants with the Fermi/LAT and MAGIC telescopes

Reichardt Candel, Ignasi 31 October 2012 (has links)
See ircresum1de1.pdf

γ-ray emission from regions of star formation: Theory and observations with the MAGIC Telescope

Domingo Santamaría, Eva 03 March 2006 (has links)
The aim of this thesis is to study the possibility that regions with intense star-forming activity may appear as gamma-ray sources for current and near-future gamma-ray detectors, both satellite-borne instruments and ground-based Cherenkov telescopes. After a positive phenomenological assessment that the gamma-ray emission from galaxies undergoing strong star formation (such as starburst galaxies or ultraluminous infrared galaxies) may lie close to the flux sensitivities of current gamma-ray telescopes, detailed models of the multiwavelength diffuse emission from the two best candidates, NGC 253 and Arp 220, have been developed and presented. Both galaxies are predicted to be detectable by GLAST, the forthcoming gamma-ray satellite of unprecedented sensitivity, and by HESS and MAGIC, the most sensitive Cherenkov telescopes currently in operation, provided enough observation time is devoted to them.

The theoretical part of the thesis also describes a model proposing the emission of significant gamma-ray fluxes from star-forming regions within our own Galaxy, such as associations of young OB stars. The model considers gamma-ray emission at energies close to a TeV produced by hadronic interactions within the stellar winds of some of the stars of the association, while predicting that the emission at lower energies is substantially suppressed by the modulation that the incoming population of primary cosmic rays suffers when penetrating the winds. The best candidates among the known Galactic OB associations are briefly discussed.

Finally, the thesis presents a first analysis of the data taken by the MAGIC Telescope during observations of two such star-forming regions: on the one hand, Arp 220, the closest ultraluminous infrared galaxy; on the other, TeV J2032+4130, which remains an unidentified source whose origin has repeatedly been linked to the powerful stellar association Cygnus OB2. Neither observation yielded a detection, so upper limits have been placed on the gamma-ray flux from the observed sources. Nevertheless, despite the few hours of observation included in the present analysis, the MAGIC upper limits for TeV J2032+4130 are nearly at the level of the flux detected for this source by the HEGRA experiment, so an analysis extended to the complete available dataset, as well as deeper observations with the MAGIC Telescope, could provide promising results.
78

Galaxy evolution: A new version of the Besançon Galaxy Model constrained with Tycho data

Czekaj, Maria A. 22 October 2012 (has links)
The understanding of the origin and evolution of the Milky Way is one of the primary goals of the Gaia mission (ESA, launch autumn 2013). To study and fully exploit the Gaia data, it will be useful to have a Galaxy model able to test various hypotheses and scenarios of galaxy formation and evolution. Kinematic and star-count data, together with the physical parameters of the stars (ages and metallicities), will make it possible to characterize our Galaxy's populations and, from that, the overall Galactic gravitational potential. One promising way to reach this goal is to optimize present population synthesis models (Robin et al. 2003) by fitting, through robust statistical techniques, the large- and small-scale structure and kinematics parameters that best reproduce the Gaia data.

This PhD thesis focused on the optimization of the structure parameters of the Milky Way's thin-disc component. We improved the Besançon Galaxy Model and then, by comparing simulations to real data, studied the process of Galaxy evolution. The Besançon Galaxy Model is a stellar population synthesis model built over the last two decades in Besançon (Robin and Crézé 1986; Robin et al. 2003). Until now, the star production process in the model was based on drawing from so-called Hess diagrams. Each Galactic population had one such diagram, computed once for a particular initial mass function (IMF), star formation rate (SFR), set of evolutionary tracks and age-metallicity relation, and fixed in the model thereafter. Since none of the evolutionary parameters could be modified, no alternative scenario of Galaxy evolution could be tested; this was one of the model's biggest weaknesses, and it motivated us to dedicate this PhD project to building a new version of the model able to handle variations of the SFR, IMF, evolutionary tracks and atmosphere models, among others.

When the evolutionary parameters are changed, the dynamical self-consistency of the model must be re-established as described in Bienaymé et al. (1987); we have therefore recalculated the Galactic gravitational potential for every new evolutionary scenario tested. The second major improvement delivered in this thesis is the implementation of stellar binarity: the new version of the Besançon Galaxy Model presented here is no longer a single-star generator but generates binary systems while maintaining the constraints on the local mass density and the luminosity function observed in the solar neighbourhood. This is an important change, since binaries can account for about 50% of the total stellar content of the Milky Way.

Once the tool was developed, we tested several combinations of IMF and SFR in the solar neighbourhood and identified those that best reproduce the local luminosity function and the Tycho-2 data. Using the new version of the model, we accomplished an unprecedented task: whole-sky comparisons for a magnitude-limited sample, in order to study the bright stars. The Tycho-2 catalogue proved ideal for this task thanks to two important advantages, its homogeneity and its completeness down to VT ~ 11 mag. Different comparison techniques and strategies were designed and applied: we examined small, specific Galactic directions and also performed general comparisons with global sky coverage. The effects on the star counts of the adopted atmosphere models, stellar evolution models and interstellar extinction, as well as of critical parameters such as the dynamical mass of the Galactic system, were rigorously evaluated. Fitting these ingredients against the Tycho-2 catalogue allowed us to confirm that the SFR in the Galactic disc has not been constant but has been decreasing since this structure began to form. To increase the efficiency of the numerous simulations and comparisons, a processing pipeline based on C, Java and scripting languages was developed and applied; it is a fully automated, portable and robust tool that allows the work to be split across several computational units.
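To illustrate the kind of star-generation step the new model performs (drawing individual stars from an IMF instead of from a fixed Hess diagram), here is a minimal sketch of sampling stellar masses from a power-law IMF by analytic inverse-transform sampling. The Salpeter slope and the mass limits used below are illustrative assumptions, not the actual IMFs tested in the thesis.

```python
import numpy as np

def sample_imf(n, m_min=0.1, m_max=100.0, alpha=2.35, seed=42):
    """Draw n stellar masses (in solar masses) from a power-law IMF,
    dN/dm proportional to m**(-alpha), via inverse-transform sampling."""
    rng = np.random.default_rng(seed)
    a = 1.0 - alpha                    # exponent of the integrated IMF
    lo, hi = m_min**a, m_max**a
    u = rng.random(n)                  # uniform deviates in [0, 1)
    # Invert the CDF of the truncated power law analytically.
    return (lo + u * (hi - lo)) ** (1.0 / a)

masses = sample_imf(100_000)
# A bottom-heavy IMF puts most of the drawn stars well below one
# solar mass; the median here lands around a few tenths of Msun.
```

A full population synthesis step would then weight such draws by the SFR over the age of the disc and place each star on its evolutionary track; the sketch above only covers the mass-drawing stage.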
79

Populating cosmological simulations with galaxies using the HOD model

Carretero Palacios, Jorge 01 February 2013 (has links)
This thesis presents a method to build mock galaxy catalogues by populating N-body simulations with prescriptions based on the halo occupation distribution (HOD) model. The catalogues are constructed to reproduce a set of observed global properties of the galaxy population, such as the luminosity function, the colour-magnitude diagram and the clustering as a function of luminosity and colour; the observational constraints come from the Sloan Digital Sky Survey (SDSS). The theoretical framework on which the production of the catalogues is based, namely the halo model and the HOD, is described.

Our mock catalogues are built from halo catalogues extracted from the Marenostrum Institut de Ciències de l'Espai (MICE) N-body dark matter simulations. We characterize the input halo catalogues by computing their halo mass function, two-point correlation function and linear large-scale halo bias. The HOD provides prescriptions for how galaxies populate haloes, and it can be parameterized in several ways. We start by following the HOD recipes given by Skibba & Sheth (2009) to generate galaxy catalogues. Since the luminosity function of the resulting catalogue does not fit the observations, we investigate an analytical derivation of two HOD parameters, Mmin and M1 (α is assumed to be 1), using only two observed constraints: the galaxy number density and the galaxy bias. A grid of 600 mock galaxy catalogues covering a wide range of values of the three HOD parameters, Mmin, M1 and α, is then generated to obtain the best-fit HOD parameters matching the observed clustering of luminosity-threshold galaxy samples. As the observations still cannot be matched, we introduce additional ingredients: the subhalo abundance matching (SHAM) technique and a modified NFW density profile.

A single mock galaxy catalogue that simultaneously follows the clustering at all luminosities and colours is produced using the halo catalogue extracted from the z=0 snapshot of the MICE Grand Challenge run. The catalogue is built with a new algorithm that introduces several modifications: scatter in the relation between the halo mass Mh and the central-galaxy luminosity, the HOD parameter M1 modelled as a function of Mh, and three Gaussian components (instead of only two) to describe the colour-magnitude distribution. The luminosity function and the linear galaxy bias of the mock catalogue are derived, and we also show how galaxy peculiar velocities affect the galaxy clustering, together with an estimate of the angular correlation function at the scale of the baryon acoustic oscillations. Finally, the versions of the catalogue currently used in the PAU and DES projects, which include galaxy-specific characteristics such as morphological properties, magnitudes in 42 narrow-band filters and gravitational shear information, are briefly described.
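The three-parameter occupation (Mmin, M1, α) described above can be sketched in a few lines. The following is a generic HOD toy in Python/NumPy, with a sharp central threshold and Poisson-sampled satellites; the parameter values and function names are illustrative assumptions, not the thesis's actual recipe (which starts from Skibba & Sheth 2009 and adds further modifications such as SHAM and a modified NFW profile).

```python
import numpy as np

def n_central(m_halo, m_min):
    # Sharp step at Mmin; published HODs often soften this with an erf.
    return (m_halo >= m_min).astype(float)

def mean_satellites(m_halo, m_min, m_1, alpha):
    # Power-law mean satellite count; satellites only appear in haloes
    # that already host a central galaxy.
    return n_central(m_halo, m_min) * (m_halo / m_1) ** alpha

def populate(halo_masses, m_min=1e12, m_1=2e13, alpha=1.0, seed=0):
    """Return per-halo (central, satellite) galaxy counts, with the
    satellite number Poisson-sampled around its mean."""
    rng = np.random.default_rng(seed)
    n_cen = n_central(halo_masses, m_min).astype(int)
    n_sat = rng.poisson(mean_satellites(halo_masses, m_min, m_1, alpha))
    return n_cen, n_sat

halos = np.array([5e11, 2e12, 5e13, 1e15])   # halo masses in Msun/h
n_cen, n_sat = populate(halos)
print(n_cen)   # [0 1 1 1] -- only haloes above Mmin host a central
```

Fitting Mmin, M1 and α against the observed galaxy number density and clustering, as the thesis does on a grid of catalogues, amounts to repeating this population step for many parameter triplets and comparing the resulting statistics with the SDSS constraints.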
80

Study of Adaptive Optics Images by means of Multiscalar Transforms

Baena Gallé, Roberto 09 December 2013 (has links)
Adaptive optics (AO) systems are used to increase the spatial resolution achieved by ground-based telescopes, which is limited by the motion of the atmospheric air layers above them. AO extends the effective cut-off frequency closer to the theoretical diffraction limit of the telescope, so that more high-frequency information from the object is present in the image. Nevertheless, since the AO correction is not complete (the effective cut-off frequency achieved by AO is still below the theoretical diffraction limit), and since the goal of image reconstruction and deconvolution algorithms is essentially the same (to recover a "real" diffraction-limited, noise-free image of the object), applying such deconvolution algorithms to datasets acquired with AO is both possible and desirable to further enhance their contrast. Multiresolution tools such as the wavelet transform (WT) have historically been introduced into many deconvolution schemes, improving their performance with respect to their non-wavelet counterparts: the ability of such transforms to separate image components according to their frequency content yields solutions that are generally closer to the real object. On the other hand, the AO community generally holds that, owing to the high variability of AO point spread functions (PSFs), the PSF estimate must be updated during the reconstruction process; blind and myopic deconvolution algorithms would then be unavoidable and yield better results than static-PSF codes. Against this state of the art of AO imaging, this thesis addresses the following goals:

1. The static-PSF algorithm AWMLE has been applied to binary systems simulated for the 3-m Shane telescope in order to evaluate the photometric accuracy of the reconstruction. Its performance is compared with the PSF-fitting algorithm StarFinder, commonly used by the AO community, as well as with other algorithms such as FITSTAR, PDF deconvolution and IDAC. The results show that AWMLE produces better results than StarFinder and FITSTAR, and very similar results to the remaining codes, especially for high Strehl ratios (SR) and matched PSFs.

2. A new deconvolution algorithm called ACMLE, based on the curvelet transform (CT) and a maximum likelihood estimator (MLE), has been designed for the reconstruction of extended and/or elongated objects. ACMLE has been tested, together with AWMLE and blind/myopic codes such as MISTRAL and IDAC, on Saturn and galaxy images simulated for the 5-m Hale telescope. The multiresolution static-PSF algorithms perform better in the presence of noise than the myopic and blind algorithms, showing that the control of noise is as important as updating the PSF estimate during the reconstruction process.

3. A one-dimensional WT has been applied to the spectral deconvolution of integral field spectroscopy (IFS) datacubes for direct imaging of exoplanets with the EPICS instrument, which will be installed at the forthcoming 39-m E-ELT telescope. Compared with the classical non-wavelet approach, an improvement of 1 mag is found for angular separations of 73 mas and beyond; the detection of close-in planets, between 43 and 58 mas, also benefits from the application of wavelets. The use of the WT allows the APLC coronagraph to obtain results similar to the apodizer-only solution, especially with increasing Talbot length, showing that the WT sorts planet frequency components and chromatic aberrations into different scales. Preliminary results for the HARMONI spectrograph are also shown.

This thesis opens several lines of research to be addressed in the future:
- The world of multiresolution transforms is extremely large and has produced dozens of new mathematical tools. Among many others, it is worth mentioning the shearlet transform, an extension and improvement of the CT, and the waveatom tool, intended to classify textures in the image. These should be studied and compared to establish their best performance and their best field of application for AO images.
- Blind and myopic algorithms have proved their ability to cope with large mismatches between the "real" PSF that formed the image and the PSF used as a first estimate in the reconstruction process; however, their performance in the presence of noise is strongly affected. It is therefore worth investigating whether, and how, multiresolution transforms can be introduced into these algorithms to improve their behaviour.
- For the study of IFS datacubes, other father scaling functions with different shapes could be proposed; in particular, a "dynamic" scaling function able to modulate its shape according to the low-frequency signal to be removed from the spaxel could potentially improve the final photometry of the detected faint source. In addition, a dictionary of wavelets that increases the decomposition resolution across the spaxel, instead of a single dyadic decomposition, could improve the photometric accuracy of detected planets as well as their spectral characterization, taking full advantage of the information contained in the IFS datacubes.
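As a rough illustration of the static-PSF, maximum-likelihood approach these comparisons revolve around, here is a minimal Richardson-Lucy (Poisson MLE) deconvolution sketch in Python/NumPy. It uses a fixed PSF throughout and omits the wavelet and curvelet regularisation that AWMLE and ACMLE add; all names, sizes and parameter values below are illustrative assumptions, not the thesis code.

```python
import numpy as np

def fft_convolve(a, kernel):
    # Circular convolution via FFT; the kernel is zero-padded to the
    # image shape and rolled so its centre sits at the origin.
    k = np.zeros_like(a)
    kh, kw = kernel.shape
    k[:kh, :kw] = kernel
    k = np.roll(k, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(k)))

def richardson_lucy(image, psf, n_iter=100, eps=1e-12):
    """Poisson maximum-likelihood deconvolution with a fixed (static) PSF."""
    image = np.maximum(image, 0.0)     # guard against FFT round-off negatives
    psf = psf / psf.sum()
    estimate = np.full_like(image, image.mean())
    for _ in range(n_iter):
        ratio = image / (fft_convolve(estimate, psf) + eps)
        estimate = estimate * fft_convolve(ratio, psf[::-1, ::-1])  # adjoint
    return estimate

# Toy check: blur a point source with a Gaussian PSF, then restore it.
y, x = np.mgrid[-7:8, -7:8]
psf = np.exp(-(x**2 + y**2) / 4.0)
psf /= psf.sum()
truth = np.zeros((64, 64))
truth[32, 32] = 100.0
blurred = fft_convolve(truth, psf)
restored = richardson_lucy(blurred, psf)
# The restored image peaks at the original source position, with a
# sharper (higher) peak than the blurred input.
```

Myopic and blind schemes would additionally re-estimate `psf` inside the iteration loop; the thesis's point is that, in noisy data, regularising this static scheme in a wavelet or curvelet domain can matter as much as that PSF update.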
