71

Joint Quantile Disease Mapping for Areal Data

Alahmadi, Hanan H. 16 November 2021 (has links)
Statistical analysis based on quantiles is more comprehensive, more flexible, and less sensitive to outliers than mean-based methods. Joint disease mapping has usually focused on mean regression, studying the correlation or dependence between the means of the diseases with standard regression. Sometimes, however, one disease limits the occurrence of another. In that case, the dependence between the two diseases lies not in the means but in different quantiles, so the analysis considers a joint disease mapping of a high quantile of one disease with a low quantile of the other. The key idea of the proposed joint quantile model is to link the diseases at different quantiles and estimate their dependence instead of connecting their means. The components of this formulation are modeled with latent Gaussian models, and the parameters are estimated via R-INLA. Finally, we illustrate the model by analyzing malaria and G6PD deficiency incidence in 21 African countries.
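As a hedged illustration of the quantile method this abstract contrasts with mean regression (a minimal Python/NumPy sketch, not code from the thesis), minimizing the pinball (check) loss over a sample recovers its τ-th quantile; joint quantile disease mapping builds regression and dependence structure on top of exactly this loss, one quantile level per disease.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def pinball_loss(q, y, tau):
    """Average check loss of a candidate quantile value q at level tau."""
    u = y - q
    return np.mean(np.where(u >= 0, tau * u, (tau - 1) * u))

rng = np.random.default_rng(0)
y = rng.gamma(shape=2.0, scale=3.0, size=5_000)   # skewed, outlier-prone data

tau = 0.9
fit = minimize_scalar(pinball_loss, args=(y, tau),
                      bounds=(y.min(), y.max()), method="bounded")
print(fit.x, np.quantile(y, tau))  # the two values should nearly coincide
```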
72

Essays on the Industrial Organization of Mortgage Markets

Luu, Hieu Duc January 2018 (has links)
Thesis advisor: Michael Grubb / This dissertation consists of two chapters on the industrial organization of mortgage markets in the United States. In the first chapter, titled “Consumer Search Costs in U.S. Mortgage Markets”, I focus on estimating the distribution of consumer search costs in the market for government-backed mortgages in the US from September 2013 to March 2015. I adapt the Hortaçsu and Syverson (2004) search model to mortgage markets and estimate the distribution of consumer search costs in each U.S. state using recent data on government-insured mortgages. I find that estimated search costs are large; a median borrower faces a search cost equivalent to about $40 in monthly repayment. At the state level, search cost magnitudes relate positively to household income and age and negatively to years of education. I solve counterfactual scenarios to study the relationship between search costs and welfare. Compared with the full-information scenario, the presence of costly consumer search decreases social welfare by about $600 in monthly repayment per borrower, because under costly search borrowers are matched with lower-quality lenders and spend resources on searching. At the national level, this decrease corresponds to approximately $35 million per month. Reductions in search costs would raise social welfare monotonically; a 10% reduction in search cost may raise social welfare by as much as $130 per borrower per month. These findings support recent policies that aim to reduce the search costs of mortgage borrowers. In the second chapter, titled “Price Discrimination in U.S. Mortgage Markets”, I examine whether costly consumer search generates price discrimination in the market for mortgages. I develop a stylized model of consumer search in mortgage markets in which firms charge optimal prices that depend on borrowers' search cost level. The model produces testable restrictions on the conditional quantile function of observed transacted rates. Using data on insured Federal Housing Administration (FHA) loans, where price variation is not driven by default risk, I run a quantile regression of transacted interest rates on a set of loan observables, including the borrower's credit score, original principal balance, and loan-to-value ratio, among others. I find that the predictions of the theoretical model are satisfied for all loan observables under consideration and that price discrimination created by costly consumer search is likely to exist in U.S. mortgage markets. / Thesis (PhD) — Boston College, 2018. / Submitted to: Boston College. Graduate School of Arts and Sciences. / Discipline: Economics.
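The conditional-quantile regression of transacted rates on loan observables described in the second chapter can be sketched as follows; this is a hedged, generic Python example using statsmodels on synthetic data, with illustrative variable names (fico, ltv, balance) rather than the author's actual specification or dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for loan-level data; variable names are illustrative only.
rng = np.random.default_rng(1)
n = 2_000
loans = pd.DataFrame({
    "fico": rng.normal(700, 40, n),
    "ltv": rng.uniform(0.6, 0.97, n),
    "balance": rng.lognormal(12, 0.4, n),
})
noise = rng.gumbel(0, 0.15, n)            # skewed dispersion in rates
loans["rate"] = 6.0 - 0.004 * (loans["fico"] - 700) + 1.2 * loans["ltv"] + noise

# Fit the conditional quantile function of rates at several levels.
for q in (0.1, 0.5, 0.9):
    res = smf.quantreg("rate ~ fico + ltv + np.log(balance)", data=loans).fit(q=q)
    print(f"tau={q:.1f}  fico={res.params['fico']:+.5f}  ltv={res.params['ltv']:+.3f}")
```

How the estimated coefficients vary across quantile levels is what the chapter's testable restrictions are about; the synthetic data here only demonstrate the mechanics.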
73

Quantile Function Modeling and Analysis for Multivariate Functional Data

Agarwal, Gaurav 25 November 2020 (has links)
Quantile function modeling is a more robust, comprehensive, and flexible method of statistical analysis than commonly used mean-based methods. More and more data are collected in the form of multivariate, functional, and multivariate functional data, for which many aspects of quantile analysis remain unexplored and challenging. This thesis presents a set of quantile analysis methods for multivariate data and multivariate functional data, with an emphasis on environmental applications, and makes four significant contributions. First, it proposes bivariate quantile analysis methods that can predict the joint distribution of a bivariate response and improve on conventional univariate quantile regression. The proposed robust statistical techniques are applied to examine barley plants grown in saltwater and freshwater conditions, providing interesting insights into barley's responses and informing future crop decisions. Second, it proposes modeling and visualization of bivariate functional data to characterize the distribution and detect outliers. The proposed methods provide an informative visualization tool for bivariate functional data and can characterize non-Gaussian, skewed, and heavy-tailed distributions using directional quantile envelopes. The radiosonde wind data application illustrates the proposed quantile analysis methods for visualization, outlier detection, and prediction. However, directional quantile envelopes are convex by definition, a feature shared by most existing methods that is undesirable for nonconvex and multimodal distributions. Third, this challenge is addressed by modeling multivariate functional data for flexible quantile contour estimation and prediction. The estimated contours are flexible in the sense that they can characterize non-Gaussian and nonconvex marginal distributions. The proposed multivariate quantile function enjoys the theoretical properties of monotonicity, uniqueness, and consistency of its contours. The proposed methods are applied to air pollution data. Finally, we perform quantile spatial prediction for non-Gaussian spatial data, which often arise in environmental applications. We introduce a copula-based multiple indicator kriging model that makes no distributional assumptions on the marginal distribution and thus offers more flexibility. The method performs better than the commonly used variogram approach and Gaussian kriging for spatial prediction in simulations and in an application to precipitation data.
74

Quantile-based generalized logistic distribution

Omachar, Brenda V. January 2014 (has links)
This dissertation proposes the development of a new quantile-based generalized logistic distribution, GLDQB, using the quantile function of the generalized logistic distribution (GLO) as the basic building block. This four-parameter distribution is highly flexible with respect to distributional shape, accommodating extensive levels of skewness and kurtosis through the inclusion of two shape parameters. The parameter space as well as the distributional shape properties are discussed at length. The distribution is characterized through its L-moments, and an estimation algorithm is presented for estimating the distribution's parameters with method of L-moments estimation. This new distribution is then used to fit and approximate the probability distribution of a data set. / Dissertation (MSc)--University of Pretoria, 2014. / Statistics / MSc / Unrestricted
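Because the distribution is fitted by the method of L-moments, the following sketch (not from the dissertation; a plain NumPy implementation of the standard unbiased probability-weighted-moment formulas) shows how the first four sample L-moments are computed; method-of-L-moments estimation then chooses the GLDQB parameters so that the theoretical L-moments match these sample values.

```python
import numpy as np

def sample_l_moments(x):
    """First four sample L-moments via unbiased probability-weighted moments."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    i = np.arange(1, n + 1)

    def b(r):
        # b_r = (1/n) * sum_i [(i-1)(i-2)...(i-r)] / [(n-1)(n-2)...(n-r)] * x_(i)
        w = np.ones(n)
        for k in range(1, r + 1):
            w *= (i - k) / (n - k)
        return np.mean(w * x)

    b0, b1, b2, b3 = b(0), b(1), b(2), b(3)
    l1 = b0
    l2 = 2 * b1 - b0
    l3 = 6 * b2 - 6 * b1 + b0
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
    return l1, l2, l3 / l2, l4 / l2   # mean, L-scale, L-skewness, L-kurtosis

rng = np.random.default_rng(0)
print(sample_l_moments(rng.logistic(loc=0.0, scale=1.0, size=50_000)))
# For the standard logistic, L-skewness is 0 and L-kurtosis is 1/6 ≈ 0.167.
```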
75

How Relations Between Early Reading Skills And 3rd-Grade Mathematics Outcomes Vary Across The Distribution: A Quantile Regression Approach

Zhu, Zhixin 26 May 2023 (has links)
No description available.
76

Predictability of Current and Future Multi-River discharges: Ganges, Brahmaputra, Yangtze, Blue Nile, and Murray-Darling Rivers

Jian, Jun 16 October 2007 (has links)
The aim of this study is to determine the predictability of river discharge in several major rivers on time scales varying from weeks to a century. We first investigate predictability through the relationship between sea surface temperature (SST) and the Ganges and Brahmaputra River discharges. On seasonal time scales, statistically significant correlations are found between monthly equatorial Pacific SST and the summer Ganges discharge at lead times of 2-3 months, due to oscillations of the ENSO phenomenon. In addition, there are strong correlations in the southwest and northeast Pacific. The Brahmaputra discharge shows weaker relationships with tropical SST. Strong correlations are found with SST in the Bay of Bengal, but these result from very warm SSTs and exceptional Brahmaputra discharge during the summer of 1998; when this year is removed, the relationships weaken everywhere except in the northwestern Pacific for the June and July discharge. The second goal is to project the behavior of future river discharge forced by increasing greenhouse gases and aerosols from natural and anthropogenic sources. Three more rivers, the Yangtze, Blue Nile, and Murray-Darling, are considered. The original precipitation output from the CMIP3 project has large inter-model variability, which limits the ability to quantify regional precipitation or runoff trends. With a statistical quantile-to-quantile (Q-Q) technique, a mapping index is built to link each model's precipitation to the observed discharge. We also use the climatological annual cycle to choose the "good" models. With the same indices, the 21st-century discharges of the first four rivers are simulated under different SRES scenarios. Because the Murray-Darling River basin does not share a similar seasonal cycle of discharge with the modeled precipitation, we project basin-averaged precipitation for it instead. The mean wet-season discharges of the Yangtze, Ganges, and Brahmaputra Rivers are projected to increase by up to 15-25% by the end of the 21st century under A1B and A2, and the risk of flooding remains high throughout this period. Inter-model deviations increase dramatically under all scenarios except COMMIT. Given the large uncertainty, the Blue Nile River discharge and Murray-Darling River basin annual precipitation do not suggest a sign of change in the multi-model mean.
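A hedged sketch of the quantile-to-quantile idea mentioned above (a generic empirical quantile-mapping example in Python/NumPy, not the study's actual mapping index): modeled values are located on the model's own empirical quantiles and read off against the observed quantiles, so the corrected output follows the observed distribution.

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_future):
    """Empirical quantile mapping: send each modeled value to the observed
    value sitting at the same quantile level of the historical distributions."""
    probs = np.linspace(0.01, 0.99, 99)
    model_q = np.quantile(model_hist, probs)
    obs_q = np.quantile(obs_hist, probs)
    # Quantile level of each future value in the model's own climatology,
    # then the observation at that same level.
    levels = np.interp(model_future, model_q, probs)
    return np.interp(levels, probs, obs_q)

rng = np.random.default_rng(42)
obs_hist = rng.gamma(2.0, 50.0, 1_000)      # observed discharge/precipitation proxy
model_hist = rng.gamma(2.0, 35.0, 1_000)    # biased model climatology
model_future = rng.gamma(2.0, 40.0, 500)    # projected (slightly wetter) model output

corrected = quantile_map(model_hist, obs_hist, model_future)
print(np.mean(model_future), np.mean(corrected))  # corrected mean reflects observed scale
```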
77

Contributions à l'analyse de fiabilité structurale : prise en compte de contraintes de monotonie pour les modèles numériques / Contributions to structural reliability analysis : accounting for monotonicity constraints in numerical models

Moutoussamy, Vincent 13 November 2015 (has links)
This thesis is set in a structural reliability context involving numerical models that implement a physical phenomenon. The reliability of an industrial component is summarized by two failure indicators, a probability and a quantile. The numerical models studied are considered deterministic and black-box; nonetheless, knowledge of the underlying physical phenomenon allows some hypotheses to be made about the model. The originality of this work lies in exploiting monotonicity properties of the phenomenon when computing these indicators. The main interest of this hypothesis is that it provides guaranteed control over the indicators, in the form of bounds obtained from an appropriate design of numerical experiments. The thesis focuses on two themes associated with this monotonicity hypothesis. The first is the study of these bounds for probability estimation: the influence of the dimension and of the chosen design of experiments on the quality of the bounds for events that can lead to the degradation of a component or industrial structure is studied. The second exploits the information provided by these bounds to estimate a probability or a quantile as well as possible. For probability estimation, the aim is to improve existing methods devoted to estimation under monotonicity constraints. The main steps developed for probability estimation are then adapted to bound and estimate a quantile. The methods are finally applied to an industrial case.
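To convey the flavor of bounding a failure probability under a monotonicity constraint (a toy one-dimensional Python sketch under assumptions not made in the thesis, namely a scalar input uniform on [0, 1]), each evaluation of a monotone increasing black-box splits the input space into a region that certainly fails, a region that certainly does not, and an undetermined remainder, so a bisection design yields certain, ever-tightening bounds:

```python
import numpy as np

def failure_probability_bounds(g, threshold, n_eval=12):
    """Certain bounds on P(g(X) > threshold), X ~ U(0, 1), g monotone increasing.
    Bisection of the undetermined interval; each call to g tightens the bounds."""
    lo, hi = 0.0, 1.0          # undetermined region is (lo, hi)
    for _ in range(n_eval):
        x = 0.5 * (lo + hi)
        if g(x) > threshold:   # monotonicity: every x' >= x also fails
            hi = x
        else:                  # monotonicity: every x' <= x is safe
            lo = x
    # Failing inputs occupy at least [hi, 1] and at most [lo, 1].
    return 1.0 - hi, 1.0 - lo

def g(x):
    return np.exp(3.0 * x) - 1.0      # stand-in monotone black-box model

lower, upper = failure_probability_bounds(g, threshold=10.0)
print(lower, upper)   # brackets the exact value 1 - ln(11)/3 ≈ 0.20
```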
78

Essays on Macro-Financial Linkages

de Rezende, Rafael B. January 2014 (has links)
This doctoral thesis is a collection of four papers on the analysis of the term structure of interest rates, with a focus on the intersection of macroeconomics and finance. "Risk in Macroeconomic Fundamentals and Bond Return Predictability" documents that factors related to risks underlying the macroeconomy, such as expectations, uncertainty, and downside (upside) macroeconomic risks, are able to explain variation in bond risk premia. The information they provide is, to a large extent, unrelated to that contained in forward rates and current macroeconomic conditions. "Out-of-sample bond excess returns predictability" provides evidence that macroeconomic variables, risks in macroeconomic outcomes, and the combination of these different sources of information are able to generate statistically as well as economically significant bond excess-return predictability in an out-of-sample setting; results suggest that this finding is not driven by revisions in macroeconomic data. The term spread (yield curve slope) is widely used as an indicator of future economic activity. "Re-examining the predictive power of the yield curve with quantile regression" provides new evidence on the predictive ability of the term spread by studying the whole conditional distribution of GDP growth. "Modeling and forecasting the yield curve by extended Nelson-Siegel class of models: a quantile regression approach" deals with yield curve prediction. More flexible Nelson-Siegel models are found to provide a better fit to the data, even after penalizing for additional model complexity, and in the forecasting exercise quantile-based models outperform all competitors. / Diss. Stockholm: Stockholm School of Economics, 2014. Introduction together with 4 papers.
79

Développement d’une méthodologie pour la garantie de performance énergétique associant la simulation à un protocole de mesure et vérification / Methodology for energy performance contracting based on simulation and a measurement protocol

Ligier, Simon 28 September 2018 (has links)
Discrepancies commonly observed between ex-ante energy performance assessments and the actual consumption of buildings hinder the development of construction and renovation projects. Energy performance contracting (EPC) guarantees a maximum level of energy consumption and thus secures investment, but its implementation faces technical and methodological problems. This thesis develops an EPC methodology that combines building energy simulation (BES) with anticipation of the measurement and verification (M&V) process. It rests first on a physico-probabilistic model of the building: uncertainties in the physical and technical parameters and the variability of dynamic loads are modeled and propagated through the simulation with a Monte Carlo analysis, and a model generating variable synthetic weather data was developed. A statistical study of the simulation results then links the consumption of interest to adjustment factors characterizing the operating conditions; quantile regression methods estimate the conditional quantile of the resulting distributions and therefore jointly capture the dependence on the adjustment factors and the risk level of the commitment. The statistical robustness of these methods and the choice of the best adjustment factors, which will be measured during building operation, are studied, and the impact of measurement uncertainties on the adjustment quantities is integrated numerically upstream in the methodology. The complete methodology is finally applied to two case studies: the renovation of residential housing and the construction of a high-energy-performance office building.
80

Estimation de mesures de risque pour des distributions elliptiques conditionnées / Estimation of risk measures for conditioned elliptical distributions

Usseglio-Carleve, Antoine 26 June 2018 (has links)
This PhD thesis focuses on estimating risk measures of a real random variable Y in the presence of a covariate vector X, assuming that the random vector (X,Y) is elliptically distributed. We first deal with the quantiles of Y given X=x, investigating a quantile regression model that is widespread in the literature and for which we obtain and discuss theoretical results. Such a model has limitations, particularly at so-called extreme quantile levels, so we propose a new, better-adapted approach; asymptotic results are given and illustrated by a simulation study and a real-data example. A second chapter focuses on another risk measure, the expectile. The structure of the chapter is essentially the same: we first test a regression model that is ill-suited to extreme expectiles, for which we then propose a methodological and statistical approach. Furthermore, by highlighting the link between extreme quantiles and extreme expectiles, we show that other extreme risk measures are closely related to extreme quantiles; we concentrate on two families, Lp-quantiles and Haezendonck-Goovaerts risk measures, for which we propose extreme-value estimators. A simulation study is also provided. Finally, the last chapter offers some directions for the case where the dimension of the covariate vector X is large. Noting that the previous estimators perform poorly in this setting, we draw on high-dimensional estimation methods to propose alternative estimators, and a simulation study gives an overview of their performance.
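As a hedged illustration of the expectile risk measure studied in the second chapter (a minimal Python/NumPy sketch, not code from the thesis), the τ-expectile of a sample can be computed by asymmetric least squares, i.e., as the fixed point of an iteratively reweighted mean; for τ near 1 this is the extreme-expectile regime the thesis connects to extreme quantiles.

```python
import numpy as np

def expectile(y, tau, tol=1e-10, max_iter=200):
    """tau-expectile of a sample by asymmetric least squares:
    the value e minimizing E[|tau - 1{y <= e}| * (y - e)^2]."""
    y = np.asarray(y, dtype=float)
    e = y.mean()                      # tau = 0.5 gives back the mean
    for _ in range(max_iter):
        w = np.where(y > e, tau, 1.0 - tau)
        e_new = np.sum(w * y) / np.sum(w)   # weighted-mean fixed point
        if abs(e_new - e) < tol:
            break
        e = e_new
    return e

rng = np.random.default_rng(0)
y = rng.standard_t(df=4, size=100_000)    # heavy-tailed sample
print(expectile(y, 0.5), expectile(y, 0.99))
# The 0.5-expectile equals the mean; high-level expectiles move into the tail.
```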
