1.
“TNOs are Cool”: A survey of the trans-Neptunian region
Kovalenko, I. D., Doressoundiram, A., Lellouch, E., Vilenius, E., Müller, T., Stansberry, J. (30 November 2017)
Context. Gravitationally bound multiple systems provide an opportunity to estimate the mean bulk density of their components, a characteristic that is not available for single objects. As a primitive population of the outer solar system, binary and multiple trans-Neptunian objects (TNOs) provide unique information about bulk density and internal structure, improving our understanding of their formation and evolution. Aims. The goal of this work is to analyse the parameters of multiple trans-Neptunian systems observed with the Herschel and Spitzer space telescopes. In particular, we perform a statistical analysis of the radiometric sizes and geometric albedos obtained from photometric observations, and of the estimated bulk densities. Methods. We use Monte Carlo simulation to estimate the real size distribution of TNOs. For this purpose, we expand the dataset of diameters by adopting the Minor Planet Center database list with the available values of absolute magnitude therein, and the albedo distribution derived from Herschel radiometric measurements. We use the two-sample Anderson-Darling non-parametric statistical method to test whether the two samples of diameters, for binary and single TNOs, come from the same distribution. Additionally, we use Spearman's coefficient as a measure of rank correlation between parameters. Uncertainties of the estimated parameters, together with the lack of data, are taken into account. Conclusions about correlations between parameters are based on statistical hypothesis testing. Results. We find that the difference in the size distributions of multiple and single TNOs is biased by small objects. The test for correlations between parameters shows that the effective diameter of binary TNOs correlates strongly with heliocentric orbital inclination and with the magnitude difference between the components of a binary system. The correlation between diameter and magnitude difference implies that small and large binaries are formed by different mechanisms. Furthermore, the statistical test indicates, although not significantly at this sample size, that a moderately strong correlation exists between diameter and bulk density.
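A minimal sketch of the two statistical tools named in the Methods, using SciPy on synthetic placeholder data (the sample sizes, distributions, and inclination values are illustrative assumptions, not the paper's data):

```python
# Two-sample Anderson-Darling test and Spearman rank correlation, as used in
# the Methods above; all numbers here are synthetic placeholders.
import numpy as np
from scipy.stats import anderson_ksamp, spearmanr

rng = np.random.default_rng(0)

# Hypothetical diameter samples (km) for binary and single TNOs.
d_binary = rng.lognormal(mean=5.5, sigma=0.6, size=60)
d_single = rng.lognormal(mean=5.2, sigma=0.7, size=400)

# H0: both samples are drawn from the same distribution.
ad = anderson_ksamp([d_binary, d_single])
print(f"AD statistic = {ad.statistic:.3f}, p ~ {ad.significance_level:.3f}")

# Rank correlation between diameter and orbital inclination (synthetic).
incl = rng.uniform(0.0, 35.0, size=60)
rho, p = spearmanr(d_binary, incl)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```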
2.
GALACTIC EXTINCTION AND REDDENING FROM THE SOUTH GALACTIC CAP u-BAND SKY SURVEY: u-BAND GALAXY NUMBER COUNTS AND u − r COLOR DISTRIBUTION
Li, Linlin, Shen, Shiyin, Hou, Jinliang, Yuan, Fangting, Zhong, Jing, Zou, Hu, Zhou, Xu, Jiang, Zhaoji, Peng, Xiyan, Fan, Dongwei, Fan, Xiaohui, Fan, Zhou, He, Boliang, Jing, Yipeng, Lesser, Michael, Li, Cheng, Ma, Jun, Nie, Jundan, Wang, Jiali, Wu, Zhenyu, Zhang, Tianmeng, Zhou, Zhimin (30 January 2017)
We study the integral Galactic extinction and reddening based on the galaxy catalog of the South Galactic Cap u-band Sky Survey (SCUSS), where u-band galaxy number counts and the u − r color distribution are used to derive the Galactic extinction and reddening, respectively. We compare these independent statistical measurements with the reddening map of Schlegel et al. (SFD) and find that both the extinction and the reddening from the number counts and color distribution are in good agreement with the SFD results in low-extinction regions (E(B−V)_SFD < 0.12 mag). However, in high-extinction regions (E(B−V)_SFD > 0.12 mag), the SFD map systematically overestimates the Galactic reddening, which can be approximated by the linear relation ΔE(B−V) = 0.43 [E(B−V)_SFD − 0.12]. By combining the results from galaxy number counts and the color distribution, we find that the shape of the Galactic extinction curve is in good agreement with the standard R_V = 3.1 extinction law of O'Donnell.
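A small sketch of the correction implied by this linear relation; the function name and the clipping to zero below the 0.12 mag break are our own reading of the quoted formula:

```python
# Remove the systematic SFD overestimate above E(B-V)_SFD = 0.12 mag using
# Delta E(B-V) = 0.43 [E(B-V)_SFD - 0.12], as quoted in the abstract above.
import numpy as np

def corrected_ebv(ebv_sfd):
    """Return E(B-V) after subtracting the SFD overestimate (hypothetical helper)."""
    ebv_sfd = np.asarray(ebv_sfd, dtype=float)
    excess = 0.43 * np.clip(ebv_sfd - 0.12, 0.0, None)  # zero below 0.12 mag
    return ebv_sfd - excess

print(corrected_ebv([0.05, 0.12, 0.30]))  # -> [0.05, 0.12, 0.2226]
```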
3.
Galaxy cluster mass estimation from stacked spectroscopic analysis
Farahi, Arya, Evrard, August E., Rozo, Eduardo, Rykoff, Eli S., Wechsler, Risa H. (21 August 2016)
We use simulated galaxy surveys to study: (i) how galaxy membership in redMaPPer clusters maps to the underlying halo population, and (ii) the accuracy of a mean dynamical cluster mass, M_σ(λ), derived from stacked pairwise spectroscopy of clusters with richness λ. Using ~130,000 galaxy pairs patterned after the Sloan Digital Sky Survey (SDSS) redMaPPer cluster sample study of Rozo et al., we show that the pairwise velocity probability density function of central-satellite pairs with m_i < 19 in the simulation matches the form seen in Rozo et al. Through joint membership matching, we deconstruct the main Gaussian velocity component into its halo contributions, finding that the top-ranked halo contributes ~60 per cent of the stacked signal. The halo mass scale inferred by applying the virial scaling of Evrard et al. to the velocity normalization matches, to within a few per cent, the log-mean halo mass derived through galaxy membership matching. We apply this approach, along with miscentring and galaxy velocity bias corrections, to estimate the log-mean matched halo mass at z = 0.2 of SDSS redMaPPer clusters. Employing the velocity bias constraints of Guo et al., we find ⟨ln M_200c | λ⟩ = ln M_30 + α_m ln(λ/30) with M_30 = (1.56 ± 0.35) × 10^14 M_⊙ and α_m = 1.31 ± 0.06 (stat) ± 0.13 (sys). Systematic uncertainty in the velocity bias of satellite galaxies overwhelmingly dominates the error budget.
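A sketch of evaluating this richness-mass relation at a given richness; propagating the quoted errors by Monte Carlo draws, with the statistical and systematic parts of the α_m error added in quadrature, is our choice and not necessarily the paper's:

```python
# Evaluate <ln M_200c | lambda> = ln M_30 + alpha_m ln(lambda/30) with the
# parameter values quoted above, propagating uncertainties by MC sampling.
import numpy as np

rng = np.random.default_rng(1)
lam = 60.0                                    # example richness
M30 = rng.normal(1.56e14, 0.35e14, 10000)     # M_sun
alpha_m = rng.normal(1.31, np.hypot(0.06, 0.13), 10000)  # stat+sys in quadrature

lnM = np.log(M30) + alpha_m * np.log(lam / 30.0)
lo, med, hi = np.percentile(np.exp(lnM), [16, 50, 84])
print(f"M_200c(lambda=60) ~ {med:.2e} (+{hi - med:.2e} / -{med - lo:.2e}) M_sun")
```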
4.
Bayesian extreme quantile regression for hidden Markov models
Koutsourelis, Antonios (January 2012)
The main contribution of this thesis is the introduction of Bayesian quantile regression for hidden Markov models, especially when we have to deal with extreme quantile regression analysis, as there is limited research on inferring conditional quantiles for hidden Markov models under a Bayesian approach. The first objective is to compare Bayesian extreme quantile regression with classical extreme quantile regression, with the help of simulated data generated by three specific models, which differ only in the distribution of the error term. It is also investigated if and how the error term's distribution affects Bayesian extreme quantile regression, in terms of parameter estimation and confidence intervals. Bayesian extreme quantile regression is performed by implementing a Metropolis-Hastings algorithm to update our parameters, while classical extreme quantile regression is performed using linear programming. Moreover, the same analysis and comparison is performed on a real data set. The results provide strong evidence that our method can be improved by combining MCMC algorithms and linear programming, in order to obtain better parameter estimates and confidence intervals.

After improving our method for Bayesian extreme quantile regression, we extend it by including hidden Markov models. First, we assume a discrete-time, finite state-space hidden Markov model, where the distribution associated with each hidden state is (a) a Normal distribution and (b) an asymmetric Laplace distribution. Our aim is to explore the number of hidden states that describe the extreme quantiles of our data sets, and to check whether a different distribution associated with each hidden state can affect our estimation. Additionally, we explore whether there are structural changes (break-points), by using break-point hidden Markov models. To perform this analysis we implement two new MCMC algorithms. The first updates the parameters and the hidden states by using a forward-backward algorithm and Gibbs sampling (when a Normal distribution is assumed), and the second uses a forward-backward algorithm and a mixture of Gibbs and Metropolis-Hastings sampling (when an asymmetric Laplace distribution is assumed).

Finally, we consider hidden Markov models where the hidden states (latent variables) are continuous. For this case of the discrete-time, continuous state-space hidden Markov model we implement a method that uses linear programming and the Kalman filter (and Kalman smoother). Our methods are used to analyze real interest rates by assuming hidden states which represent different financial regimes. We show that our methods work very well in terms of parameter estimation, and also in hidden-state and break-point estimation, which is very useful for real-life applications of those methods.
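To make the core machinery concrete, here is a minimal sketch (ours, not the thesis code) of Bayesian quantile regression for a single extreme quantile, using the standard asymmetric Laplace working likelihood together with a random-walk Metropolis-Hastings sampler; the data, prior, and tuning constants are illustrative assumptions:

```python
# Random-walk Metropolis-Hastings for quantile regression at tau = 0.95,
# with an asymmetric Laplace (check-loss) working likelihood and a vague
# Gaussian prior on the coefficients. Synthetic data throughout.
import numpy as np

rng = np.random.default_rng(2)
tau, sigma = 0.95, 1.0                       # target quantile, fixed scale

n = 200
x = rng.uniform(0, 1, n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + rng.standard_normal(n)   # y = 1 + 2x + noise

def log_post(beta):
    u = y - X @ beta
    check = u * (tau - (u < 0))              # check (pinball) loss rho_tau
    return -np.sum(check) / sigma - 0.5 * np.sum(beta**2) / 100.0

beta, lp = np.zeros(2), log_post(np.zeros(2))
chain = []
for _ in range(20000):
    prop = beta + 0.1 * rng.standard_normal(2)   # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:     # MH accept/reject
        beta, lp = prop, lp_prop
    chain.append(beta.copy())

print(np.mean(chain[5000:], axis=0))         # posterior mean after burn-in
```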
5.
Approximation methods and inference for stochastic biochemical kinetics
Schnoerr, David Benjamin (January 2016)
Recent experiments have shown the fundamental role that random fluctuations play in many chemical systems in living cells, such as gene regulatory networks. Mathematical models are thus indispensable to describe such systems and to extract relevant biological information from experimental data. Recent decades have seen a considerable amount of modelling effort devoted to this task. However, current methodologies still present outstanding mathematical and computational hurdles. In particular, models which retain the discrete nature of particle numbers necessarily incur severe computational overheads, greatly complicating the tasks of characterising statistically the noise in cells and inferring parameters from data.

In this thesis we study analytical approximations and inference methods for stochastic reaction dynamics. The chemical master equation is the accepted description of stochastic chemical reaction networks whenever spatial effects can be ignored. Unfortunately, for most systems no analytic solutions are known and stochastic simulations are computationally expensive, making analytic approximations appealing alternatives. In the case where spatial effects cannot be ignored, such systems are typically modelled by means of stochastic reaction-diffusion processes. As in the non-spatial case, an analytic treatment is rarely possible and simulations quickly become infeasible. In particular, the calibration of models to data constitutes a fundamental unsolved problem.

In the first part of this thesis we study two approximation methods of the chemical master equation: the chemical Langevin equation and moment closure approximations. The chemical Langevin equation approximates the discrete-valued process described by the chemical master equation by a continuous diffusion process. Despite being frequently used in the literature, it remains unclear how the boundary conditions behave under this transition from discrete to continuous variables. We show that this boundary problem results in the chemical Langevin equation being mathematically ill-defined if defined in real space, due to the occurrence of square roots of negative expressions. We show that this problem can be avoided by extending the state space from real to complex variables. We prove that this approach gives rise to real-valued moments and thus admits a probabilistic interpretation. Numerical examples demonstrate the better accuracy of the developed complex chemical Langevin equation compared with various real-valued implementations proposed in the literature.

Moment closure approximations aim at directly approximating the moments of a process, rather than its distribution. The chemical master equation gives rise to an infinite system of ordinary differential equations for the moments of a process. Moment closure approximations close this infinite hierarchy of equations by expressing moments above a certain order in terms of lower-order moments. This is an ad hoc approximation without any systematic justification, and the question arises whether the resulting equations always lead to physically meaningful results. We find that this is indeed not always the case. Rather, moment closure approximations may give rise to diverging time trajectories or otherwise unphysical behaviour, such as negative mean values or unphysical oscillations. They thus fail to admit a probabilistic interpretation in these cases, and care is needed when using them so as not to draw wrong conclusions.
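A toy illustration of the idea (our cartoon under simplifying assumptions, not the thesis's actual scheme): Euler-Maruyama integration of the chemical Langevin equation for a birth-death process, with the state kept complex-valued so that square roots of transiently negative propensities remain defined:

```python
# Chemical Langevin equation for a birth-death process: production at rate k1,
# degradation at rate k2*X. The state is complex, echoing the complex CLE idea
# discussed above; only real parts of moments are reported.
import numpy as np

rng = np.random.default_rng(3)
k1, k2, dt, T = 5.0, 1.0, 1e-3, 10.0
x = np.full(1000, 2.0 + 0j)            # ensemble of complex trajectories

for _ in range(int(T / dt)):
    a1 = np.full(x.size, k1 + 0j)      # propensity of the birth reaction
    a2 = k2 * x                        # propensity of the death reaction
    dW1 = rng.standard_normal(x.size) * np.sqrt(dt)
    dW2 = rng.standard_normal(x.size) * np.sqrt(dt)
    x = x + (a1 - a2) * dt + np.sqrt(a1) * dW1 - np.sqrt(a2) * dW2

print("mean copy number:", x.real.mean())   # approaches k1/k2 = 5
```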
In the second part of this work we consider systems where spatial effects have to be taken into account. In general, such stochastic reaction-diffusion processes are only defined in an algorithmic sense, without any analytic description, and it is hence not even conceptually clear how to define likelihoods of experimental data for such processes. Calibration of such models to experimental data thus constitutes a highly non-trivial task. We derive here a novel inference method by establishing a basic relationship between stochastic reaction-diffusion processes and spatio-temporal Cox processes, two classes of models that had previously been considered distinct from each other. This novel connection naturally allows us to compute approximate likelihoods and thus to perform inference tasks for stochastic reaction-diffusion processes. The accuracy and efficiency of this approach are demonstrated by means of several examples. Overall, this thesis advances the state of the art of modelling methods for stochastic reaction systems. It advances the understanding of several existing methods by elucidating their fundamental limitations, and several novel approximation and inference methods are developed.
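The object at the heart of that connection is easy to sketch: if a latent intensity field drives the observed points, the counts in small spatial cells are approximately Poisson given the field, so an approximate log-likelihood is a sum of Poisson terms. A minimal sketch with a synthetic field (the grid size, cell area, and gamma-distributed field are placeholders of ours):

```python
# Approximate Cox-process log-likelihood: Poisson counts per grid cell,
# conditioned on a latent intensity field. Everything below is synthetic.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(4)
cell_area = 0.01
intensity = rng.gamma(shape=2.0, scale=5.0, size=(50, 50))   # latent field
counts = rng.poisson(intensity * cell_area)                  # observed counts per cell

log_lik = poisson.logpmf(counts, intensity * cell_area).sum()
print(f"approximate Cox-process log-likelihood: {log_lik:.1f}")
```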
6.
A comparison of flare forecasting methods, I: results from the “All-clear” workshop
Barnes, G., Leka, K.D., Schrijver, C.J., Colak, Tufan, Qahwaji, Rami S.R., Ashamari, Omar, Yuan, Y., Zhang, J., McAteer, R.T.J., Bloomfield, D.S., Higgins, P.A., Gallagher, P.T., Falconer, D.A., Georgoulis, M.K., Wheatland, M.S., Balch, C. (05 July 2016)
Solar flares produce radiation which can have an almost immediate effect on the near-Earth environment, making it crucial to forecast flares in order to mitigate their negative effects. The number of published approaches to flare forecasting using photospheric magnetic field observations has proliferated, with varying claims about how well each works. Because of the different analysis techniques and data sets used, it is essentially impossible to compare the results from the literature. This problem is exacerbated by the low event rates of large solar flares. The challenges of forecasting rare events have long been recognized in the meteorology community, but have yet to be fully acknowledged by the space weather community. During the interagency workshop on “all clear” forecasts held in Boulder, CO in 2009, the performance of a number of existing algorithms was compared on common data sets, specifically line-of-sight magnetic field and continuum intensity images from MDI, with consistent definitions of what constitutes an event. We demonstrate the importance of making such systematic comparisons, and of using standard verification statistics to determine what constitutes a good prediction scheme. When a comparison was made in this fashion, no one method clearly outperformed all others, which may in part be due to the strong correlations among the parameters used by different methods to characterize an active region. For M-class flares and above, the set of methods tends towards a weakly positive skill score (as measured with several distinct metrics), with no participating method proving substantially better than climatological forecasts.

Acknowledgements: This work is the outcome of many collaborative and cooperative efforts. The 2009 “Forecasting the All-Clear” Workshop in Boulder, CO was sponsored by NASA/Johnson Space Flight Center's Space Radiation Analysis Group, the National Center for Atmospheric Research, and the NOAA/Space Weather Prediction Center, with additional travel support for participating scientists from NASA LWS TRT NNH09CE72C to NWRA. The authors thank the participants of that workshop, in particular Drs. Neal Zapp, Dan Fry, and Doug Biesecker, for the informative discussions during those three crazy days, and NCAR's Susan Baltuch and NWRA's Janet Biggs for organizational prowess. Workshop preparation and analysis support was provided for GB and KDL by NASA LWS TRT NNH09CE72C and NASA Heliophysics GI NNH12CG10C. PAH and DSB received funding from the European Space Agency PRODEX Programme, while DSB and MKG also received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 640216 (FLARECAST project). MKG also acknowledges research performed under the A-EFFort project and subsequent service implementation, supported under ESA Contract number 4000111994/14/D/MPR. YY was supported by the National Science Foundation under grants ATM 09-36665, ATM 07-16950, ATM-0745744 and by NASA under grants NNX0-7AH78G, NNXO-8AQ90G. YY owes his deepest gratitude to his advisers Prof. Frank Y. Shih, Prof. Haimin Wang and Prof. Ju Jing for long discussions, for reading previous drafts of his work and providing many valuable comments that improved the presentation and contents of this work. JMA was supported by NSF Career Grant AGS-1255024 and by a NMSU Vice President for Research Interdisciplinary Research Grant.
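Two of the standard verification statistics such comparisons rely on, sketched from the usual 2×2 forecast/event contingency table (the paper uses several distinct metrics; these two are common choices in the field, not necessarily the paper's exact set):

```python
# True Skill Statistic and Heidke Skill Score from a 2x2 contingency table of
# forecasts vs. observed flares. Counts below are illustrative placeholders.
def tss(hits, misses, false_alarms, correct_negatives):
    """True Skill Statistic: probability of detection minus false alarm rate."""
    pod = hits / (hits + misses)
    pofd = false_alarms / (false_alarms + correct_negatives)
    return pod - pofd

def hss(hits, misses, false_alarms, correct_negatives):
    """Heidke Skill Score: fractional improvement over random-chance forecasts."""
    n = hits + misses + false_alarms + correct_negatives
    expected = ((hits + misses) * (hits + false_alarms)
                + (correct_negatives + misses)
                * (correct_negatives + false_alarms)) / n
    return (hits + correct_negatives - expected) / (n - expected)

print(tss(20, 10, 50, 920), hss(20, 10, 50, 920))
```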
7.
MAPS OF EVOLVING CLOUD STRUCTURES IN LUHMAN 16AB FROM HST TIME-RESOLVED SPECTROSCOPY
Karalidi, Theodora, Apai, Dániel, Marley, Mark S., Buenzli, Esther (06 July 2016)
WISE J104915.57-531906.1 is the nearest brown dwarf binary to our solar system, consisting of two brown dwarfs in the L/T transition: Luhman 16A and B. In this paper, we present the first map of Luhman 16A, and maps of Luhman 16B for two epochs. Our maps were created by applying Aeolus, a Markov chain Monte Carlo code that maps the top-of-the-atmosphere (TOA) structure of brown dwarf and other ultracool atmospheres, to light curves of Luhman 16A and B obtained with the Hubble Space Telescope's G141 and G102 grisms. Aeolus retrieved three or four spots in the TOA of Luhman 16A and B, with a surface coverage of 19%-32% (depending on an assumed rotational period of 5 hr or 8 hr) or 21%-38.5% (depending on the observational epoch), respectively. The brightness temperature of the spots in the best-fit models was ~200 K hotter than the background TOA. We compared our Luhman 16B map with the only previously published map. Interestingly, our map contains a large TOA spot that is cooler (ΔT ≈ 51 K) than the background and lies at low latitudes, in agreement with the previous Luhman 16B map. Finally, we report the detection of a feature reappearing in Luhman 16B light curves that are separated by tens to hundreds of rotations. We speculate that this feature is related to TOA structures of Luhman 16B.
8.
Data analysis techniques useful for the detection of B-mode polarisation of the Cosmic Microwave Background
Wallis, Christopher (January 2016)
Asymmetric beams can create significant bias in estimates of the power spectra from cosmic microwave background (CMB) experiments. With the temperature power spectrum many orders of magnitude stronger than the B-mode power spectrum, any systematic error that couples the two must be carefully controlled and/or removed. In this thesis, I derive unbiased estimators for the CMB temperature and polarisation power spectra taking into account general beams and scan strategies. I test my correction algorithm on simulations of two temperature-only experiments and demonstrate that it is unbiased. I also develop a map-making algorithm that removes beam-asymmetry bias at the map level, and demonstrate its implementation using simulations.

I present two new map-making algorithms that create polarisation maps clean of temperature-to-polarisation leakage systematics due to differential gain and pointing between a detector pair. Where a half-wave plate is used, I show that the spin-2 systematic due to differential ellipticity can also be removed using my algorithms. The first algorithm is designed to work with scan strategies that have a good range of crossing angles for each map pixel, and the second with scan strategies that have a limited range of crossing angles. I demonstrate both algorithms using simulations of time-ordered data with realistic scan strategies and instrumental noise.

I investigate the role that a scan strategy can have in mitigating certain common systematics by averaging systematic errors down with many crossing angles. I present approximate analytic forms for the error on the recovered B-mode power spectrum that would result from these systematic errors, and use these analytic predictions to search the parameter space of common satellite scan strategies to identify the features of a scan strategy that have most impact in mitigating systematic effects.
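A sketch of one common figure of merit for how well a scan strategy averages down a spin-n systematic in a given map pixel: the magnitude of the mean of exp(i n ψ) over that pixel's crossing angles ψ (smaller is better). That this statistic matches the thesis's exact definition is an assumption; the crossing-angle samples below are synthetic:

```python
# Crossing-angle coverage statistic for spin-n systematics in one map pixel.
import numpy as np

def spin_n_coverage(psi, n):
    """|< e^{i n psi} >| over the crossing angles psi (radians) of one pixel."""
    return np.abs(np.mean(np.exp(1j * n * psi)))

rng = np.random.default_rng(5)
psi_good = rng.uniform(0.0, 2.0 * np.pi, 500)   # wide range of crossing angles
psi_poor = rng.normal(0.3, 0.05, 500)           # nearly a single angle

for n in (1, 2):   # spin-1 (e.g. differential pointing), spin-2 (ellipticity)
    print(n, spin_n_coverage(psi_good, n), spin_n_coverage(psi_poor, n))
```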
9.
Quasar Photometric Redshifts and Candidate Selection: A New Algorithm Based on Optical and Mid-infrared Photometric Data
Yang, Qian, Wu, Xue-Bing, Fan, Xiaohui, Jiang, Linhua, McGreer, Ian, Green, Richard, Yang, Jinyi, Schindler, Jan-Torge, Wang, Feige, Zuo, Wenwen, Fu, Yuming (01 December 2017)
We present a new algorithm to estimate quasar photometric redshifts (photo-zs) by considering the asymmetries in the relative flux distributions of quasars. The relative flux models are built with multivariate Skew-t distributions in the multidimensional space of relative fluxes as a function of redshift and magnitude. For 151,392 quasars in the SDSS, we achieve a photo-z accuracy, defined as the fraction of quasars with |Δz| = |z_s − z_p|/(1 + z_s) within 0.1 (where z_p is the photo-z and z_s the spectroscopic redshift), of 74%. Combining the WISE W1 and W2 infrared data with the SDSS data, the photo-z accuracy is enhanced to 87%. Using the Pan-STARRS1 or DECaLS photometry with WISE W1 and W2 data, the photo-z accuracies are 79% and 72%, respectively. The prior probabilities as a function of magnitude for quasars, stars, and galaxies are calculated based on (1) the quasar luminosity function, (2) the Milky Way synthetic simulation with the Besançon model, and (3) the Bayesian Galaxy Photometric Redshift estimation, respectively. The relative fluxes of stars are obtained with the Padova isochrones, and the relative fluxes of galaxies are modeled through galaxy templates. We test our classification method for selecting quasars using the DECaLS g, r, z, and WISE W1 and W2 photometry. The quasar selection completeness is higher than 70% over a wide redshift range, 0.5 < z < 4.5, and a wide magnitude range, 18 < r < 21.5 mag. Our photo-z regression and classification method has the potential to extend to future surveys. The photo-z code will be publicly available.
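A minimal sketch of the accuracy statistic quoted above, the fraction of quasars with |z_s − z_p|/(1 + z_s) within 0.1; the synthetic redshifts and scatter below are placeholders, not the paper's data:

```python
# Photo-z accuracy: fraction of objects with |dz| = |z_s - z_p|/(1 + z_s) < 0.1.
import numpy as np

def photoz_accuracy(z_spec, z_photo, threshold=0.1):
    dz = np.abs(z_spec - z_photo) / (1.0 + z_spec)
    return np.mean(dz < threshold)

rng = np.random.default_rng(6)
z_spec = rng.uniform(0.3, 4.0, 10000)
z_photo = z_spec + 0.08 * (1 + z_spec) * rng.standard_normal(10000)  # toy scatter
print(f"accuracy: {photoz_accuracy(z_spec, z_photo):.2%}")
```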
10.
The Distribution and Ages of Star Clusters in the Small Magellanic Cloud: Constraints on the Interaction History of the Magellanic Clouds
Bitsakis, Theodoros, González-Lópezlira, R. A., Bonfini, P., Bruzual, G., Maravelias, G., Zaritsky, D., Charlot, S., Ramírez-Siordia, V. H. (26 January 2018)
We present a new study of the spatial distribution and ages of the star clusters in the Small Magellanic Cloud (SMC). To detect the star clusters and estimate their ages we rely on the new fully automated method developed by Bitsakis et al. Our code detects 1319 star clusters in the central 18 deg² of the SMC that we surveyed (1108 of which have never been reported before). The age distribution of those clusters suggests enhanced cluster formation around 240 Myr ago. It also implies significant differences in the cluster distribution of the bar with respect to the rest of the galaxy, with the younger clusters being predominantly located in the bar. Having used the same setup, and data from the same surveys, as in our previous study of the LMC, we are able to robustly compare the cluster properties of the two galaxies. Our results suggest that the bulk of the clusters in both galaxies were formed approximately 300 Myr ago, probably during a direct collision between the two galaxies. On the other hand, the locations of the young (≤ 50 Myr) clusters in both Magellanic Clouds, found where their bars join the H I arms, suggest that cluster formation in those regions is a result of internal dynamical processes. Finally, we discuss the potential causes of the apparent outside-in quenching of cluster formation that we observe in the SMC. Our findings are consistent with an evolutionary scheme where the interactions between the Magellanic Clouds constitute the major mechanism driving their overall evolution.