31 |
How Relations Between Early Reading Skills And 3rd-Grade Mathematics Outcomes Vary Across The Distribution: A Quantile Regression Approach
Zhu, Zhixin, 26 May 2023 (has links)
No description available.
|
32 |
Regularized and robust regression methods for high dimensional data
Hashem, Hussein Abdulahman, January 2014 (has links)
Recently, variable selection in high-dimensional data has attracted much research interest. Classical stepwise subset selection methods are widely used in practice, but when the number of predictors is large these methods are difficult to implement. In these cases, modern regularization methods have become a popular choice as they perform variable selection and parameter estimation simultaneously. However, estimation becomes more difficult and challenging when the data suffer from outliers or when the assumption of normality is violated, such as in the case of heavy-tailed errors. In these cases, quantile regression is the most appropriate method to use. In this thesis we combine these two approaches to produce regularized quantile regression methods. Chapter 2 presents a comparative simulation study of regularized and robust regression methods when the response variable is continuous. In Chapter 3, we develop a quantile regression model with a group lasso penalty for binary response data when the predictors have a grouped structure and the data suffer from outliers. In Chapter 4, we extend this method to the case of censored response variables. Numerical examples on simulated and real data are used to evaluate the performance of the proposed methods in comparison with other existing methods.
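As a rough illustration of the penalized quantile regression idea described in this abstract, the following Python sketch fits an L1-penalized median regression on simulated heavy-tailed data using scikit-learn's QuantileRegressor. It is not the thesis's group-lasso estimator for binary or censored responses; the data, penalty level and variable names are invented for illustration.

```python
# A minimal sketch of L1-penalized (lasso) quantile regression: check loss
# plus a sparsity penalty, fitted on data with heavy-tailed errors.
import numpy as np
from sklearn.linear_model import QuantileRegressor

rng = np.random.default_rng(0)
n, p = 200, 50                        # many predictors, only a few informative
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]           # sparse true coefficients
# heavy-tailed (t-distributed) errors, the setting where the quantile loss helps
y = X @ beta + rng.standard_t(df=2, size=n)

# Median regression (tau = 0.5) with an L1 penalty: pinball loss + alpha * ||beta||_1
model = QuantileRegressor(quantile=0.5, alpha=0.05, solver="highs")
model.fit(X, y)

selected = np.flatnonzero(np.abs(model.coef_) > 1e-8)
print("non-zero coefficients at indices:", selected)
print("estimates:", np.round(model.coef_[selected], 2))
```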
|
33 |
New regression methods for measures of central tendency
Aristodemou, Katerina, January 2014 (has links)
Measures of central tendency have been widely used for summarising statistical data, with the mean being the most popular summary statistic. However, in real-life applications it is not always the most representative measure of central location, especially when dealing with data that are skewed or contain outliers. Less biased alternatives are the median and the mode. Median and quantile regression have been used in different fields to examine the effect of factors at different points of the distribution. Mode estimation, on the other hand, has found many applications in cases where the analysis focuses on obtaining information about the most typical value or pattern. This thesis demonstrates that the mode also plays an important role in the analysis of big data, which is becoming increasingly important in many sectors of the global economy. However, mode regression has not been widely applied, despite its clear conceptual benefit, because of the computational and theoretical limitations of the existing estimators. Similarly, despite the popularity of the binary quantile regression model, computationally straightforward estimation techniques do not exist. Driven by the demand for simple, well-founded and easy-to-implement inference tools, this thesis develops a series of new regression methods for mode and binary quantile regression. Chapter 2 deals with mode regression methods from the Bayesian perspective and presents one parametric and two non-parametric methods of inference. Chapter 3 demonstrates a mode-based, fast pattern-identification method for big data and proposes the first fully parametric mode regression method, which effectively uncovers the dependency of typical patterns on a number of covariates. The proposed approach is demonstrated through the analysis of a decade-long dataset on the Body Mass Index and associated factors, taken from the Health Survey for England. Finally, Chapter 4 presents an alternative binary quantile regression approach, based on nonlinear least asymmetric weighted squares, which can be implemented using standard statistical packages and guarantees a unique solution.
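The "least asymmetric weighted squares" idea behind Chapter 4 can be illustrated, in a simplified linear form, by expectile regression solved with iteratively reweighted least squares. The sketch below is only a stylised analogue of the thesis's nonlinear binary-response estimator, under assumed data and function names.

```python
# A minimal sketch of linear asymmetric least squares (expectile regression)
# solved by iteratively reweighted least squares.
import numpy as np

def asymmetric_ls(X, y, tau=0.5, n_iter=100, tol=1e-8):
    """Minimise sum_i w_i * (y_i - x_i'beta)^2 with w_i = tau when the
    residual is non-negative and (1 - tau) otherwise."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]      # ordinary least squares start
    for _ in range(n_iter):
        resid = y - X @ beta
        w = np.where(resid >= 0, tau, 1 - tau)       # asymmetric weights
        WX = X * w[:, None]
        beta_new = np.linalg.solve(X.T @ WX, WX.T @ y)
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta

rng = np.random.default_rng(1)
n = 300
x = rng.standard_normal(n)
X = np.column_stack([np.ones(n), x])
# heteroskedastic noise, so upper and lower expectiles have different slopes
y = 1.0 + 0.5 * x + (1 + 0.5 * np.abs(x)) * rng.standard_normal(n)

for tau in (0.1, 0.5, 0.9):
    print(f"tau={tau}: intercept, slope =", np.round(asymmetric_ls(X, y, tau), 3))
```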
|
34 |
Methods for solving problems in financial portfolio construction, index tracking and enhanced indexation
Mezali, Hakim, January 2013 (has links)
The focus of this thesis is on index tracking, which aims to replicate the movements of an index of a specific financial market. It is a form of passive portfolio (fund) management that attempts to mirror the performance of a specific index and generate returns equal to those of the index, but without purchasing all of the stocks that make up the index. Additionally, we consider the problem of outperforming the index: enhanced indexation. It attempts to generate modest excess returns compared to the index. Enhanced indexation is related to index tracking in that it is a relative return strategy: one seeks a portfolio that will achieve more than the return given by the index (excess return). In the first approach, we propose two models for the objective function associated with the choice of a tracking portfolio, namely: minimise the maximum absolute difference between the tracking portfolio return and the index return, and minimise the average of the absolute differences between the two. We illustrate and investigate the performance of our models from two perspectives, namely with and without the fixed and variable costs associated with buying or selling each stock. The second approach studied is that of using quantile regression for both index tracking and enhanced indexation; we present mixed-integer linear programming formulations of these problems based on quantile regression. The third approach considered concerns quantifying the level of uncertainty associated with the portfolio selected. This quantification of uncertainty is important because it provides investors with an indication of the degree of risk that can be expected from holding the selected portfolio over the holding period. Here a bootstrap approach is employed to quantify the uncertainty of the portfolio selected from our quantile regression model.
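The index-tracking objective based on the average of absolute differences can be written as a small linear programme. The Python sketch below is a stylised illustration of that formulation on simulated data, with no transaction costs or cardinality limits; it is not the thesis's full model.

```python
# A minimal sketch of index tracking as a linear programme: choose long-only
# weights that minimise the mean absolute deviation between portfolio and index returns.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
T, N = 120, 10                         # 120 periods, 10 candidate stocks
R = rng.normal(0.005, 0.04, (T, N))    # stock returns
index = R @ rng.dirichlet(np.ones(N)) + rng.normal(0, 0.002, T)  # index returns

# Decision vector x = [w_1..w_N, u_1..u_T]; minimise (1/T) * sum_t u_t
c = np.concatenate([np.zeros(N), np.ones(T) / T])

# |(R w)_t - index_t| <= u_t written as two sets of linear inequalities
A_ub = np.block([[ R, -np.eye(T)],
                 [-R, -np.eye(T)]])
b_ub = np.concatenate([index, -index])

A_eq = np.concatenate([np.ones(N), np.zeros(T)])[None, :]   # weights sum to one
b_eq = [1.0]

bounds = [(0, 1)] * N + [(0, None)] * T   # long-only weights, u_t >= 0
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
weights = res.x[:N]
print("tracking weights:", np.round(weights, 3), " MAD:", res.fun)
```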
|
35 |
A Tale Of Two Shocks: The Dynamics of Internal and External Shock Vulnerability in Real Estate Markets / (Swedish title, translated: A tale of two shocks: the international housing market's sensitivity to internal and external shocks)
Dahlström, Amanda; Ege, Oskar, January 2016 (has links)
This paper examines the major potential drivers of five international real estate markets, with a focus on pushing versus pulling effects. Using a quantile regression approach for the period 2000-2015, we examine the coefficients under three different market conditions: downward (bearish), normal (median) and upward (bullish). Using monthly data, we look at five of the larger securitized property markets, namely the US, UK, Australia, Singapore and Hong Kong. We find, albeit not conclusively, that stock market volatility, as measured by the pushing factor VIX on the S&P 500, best informs property market returns in a bearish market environment. We also find that our pulling factors (money supply, treasury yields and unemployment) present theoretically grounded results in most cases, with the expected signs. However, compared to the volatility index, the pulling factors are not as uniformly suited to informing property market returns during bearish markets. We also find a range of insignificant results, which might be indicative of a suboptimal model specification and/or choice of estimation method.
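The bearish/median/bullish comparison amounts to fitting the same regression at several quantiles. The sketch below illustrates this with statsmodels on simulated stand-in data; the variable names and data-generating process are invented and are not the authors' dataset.

```python
# A minimal sketch of estimating factor effects at bearish (0.1), median (0.5)
# and bullish (0.9) quantiles of securitised property returns.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 190                                    # roughly monthly data, 2000-2015
df = pd.DataFrame({
    "vix": rng.normal(20, 8, n),           # pushing factor: equity volatility
    "money_supply": rng.normal(0.4, 0.2, n),
    "treasury_yield": rng.normal(3.0, 1.0, n),
})
# by construction, the volatility effect is stronger in the left tail
df["reit_return"] = (0.8 - 0.05 * df["vix"] - 0.3 * df["treasury_yield"]
                     + rng.standard_t(4, n) * (1 + 0.05 * df["vix"]))

X = sm.add_constant(df[["vix", "money_supply", "treasury_yield"]])
for q, label in [(0.1, "bearish"), (0.5, "median"), (0.9, "bullish")]:
    fit = sm.QuantReg(df["reit_return"], X).fit(q=q)
    print(f"{label} (tau={q}): vix coefficient = {fit.params['vix']:.3f}")
```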
|
36 |
THREE ESSAYS ON THE BLACK WHITE WAGE GAP
Ogunro, Nola, 01 January 2009 (has links)
During the 1960s and early 1970s, the black-white wage gap narrowed significantly, but it has remained constant since the late 1980s. The black-white wage gap in the recent period may reflect differences in human capital. A key component of human capital is labor market experience. The first chapter of this dissertation examines how differences in the returns to and patterns of experience accumulation affect the black-white wage gap. Accounting for differences in the nature of experience accumulation does not explain the very large gap in wages between blacks and whites. Instead, the wage gap seems to be driven by constant differences between blacks and whites which may represent unobserved differences in skill or the effects of discrimination. The second chapter of the dissertation examines the role of discrimination in explaining the wage gap by asking whether statistical discrimination by employers causes the wages of never-incarcerated blacks to suffer when the incarceration rate of blacks in an area increases. I find little evidence that black incarceration rates negatively affect the wages of never-incarcerated blacks. Instead, macroeconomic effects in areas with higher incarceration rates play a more important role in explaining the variation in black wages. The third and final chapter of the dissertation examines the black-white wage gap and its determinants across the entire wage distribution, to determine whether the factors driving the wage gap vary across the distribution. I find that at the top of the conditional distribution, differences in the distribution of characteristics explain relatively more of the black-white wage gap than differences in the prices of characteristics. At the bottom of the conditional distribution, differences in the distribution of characteristics also explain relatively more of the wage gap, although this finding varies across different specifications of the model.
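A simplified illustration of splitting a conditional-quantile wage gap into a characteristics component and a prices (coefficients) component is sketched below. It is an Oaxaca-Blinder-style split of fitted quantile-regression predictions evaluated at group means, not the chapter's full distributional decomposition; the groups, covariates and data are simulated.

```python
# A minimal sketch of decomposing a fitted quantile-regression wage gap into
# a "characteristics" part and a "prices (coefficients)" part at one quantile.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)

def simulate(n, edu_mean, ret_edu):
    edu = rng.normal(edu_mean, 2, n)
    exper = rng.uniform(0, 30, n)
    logw = 1.0 + ret_edu * edu + 0.02 * exper + rng.standard_normal(n) * 0.4
    return np.column_stack([edu, exper]), logw

X_w, y_w = simulate(2000, edu_mean=13.5, ret_edu=0.10)   # "white" sample
X_b, y_b = simulate(1000, edu_mean=12.5, ret_edu=0.08)   # "black" sample

tau = 0.9                                                # top of the distribution
b_w = sm.QuantReg(y_w, sm.add_constant(X_w)).fit(q=tau).params
b_b = sm.QuantReg(y_b, sm.add_constant(X_b)).fit(q=tau).params

xbar_w = np.r_[1.0, X_w.mean(axis=0)]
xbar_b = np.r_[1.0, X_b.mean(axis=0)]

gap = xbar_w @ b_w - xbar_b @ b_b
characteristics = (xbar_w - xbar_b) @ b_w    # differences in X, valued at "white" prices
prices = xbar_b @ (b_w - b_b)                # differences in coefficients
print(f"tau={tau}: gap={gap:.3f}, characteristics={characteristics:.3f}, prices={prices:.3f}")
```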
|
37 |
STATISTICAL METHODS IN MICROARRAY DATA ANALYSIS
Huang, Liping, 01 January 2009 (has links)
This dissertation includes three topics. First topic: Regularized estimation in the AFT model with high-dimensional covariates. Second topic: A novel application of quantile regression to the identification of biomarkers, exemplified by equine cartilage microarray data. Third topic: Normalization and analysis of cDNA microarray data using linear contrasts.
|
38 |
Topics in financial market risk modelling
Ma, Zishun, January 2012 (has links)
The growth of the financial risk management industry has been motivated by the increased volatility of financial markets combined with the rapid innovation of derivatives. Since the 1970s, several financial crises have occurred globally, with devastating consequences for financial and non-financial institutions and for the real economy. The most recent US subprime crisis led to enormous losses for financial and non-financial institutions and to a recession in many countries, including the US and UK. A common lesson from these crises is that advanced financial risk management systems are required. Financial risk management is a continuous process of identifying, modeling, forecasting and monitoring risk exposures arising from financial investments. The Value at Risk (VaR) methodology has served as one of the most important tools used in this process. This quantitative tool, first introduced by JPMorgan in its RiskMetrics system in 1995, has undergone considerable revolution and development during the last 15 years. It has now become one of the most prominent tools employed by financial institutions, regulators, asset managers and non-financial corporations for risk measurement. My PhD research undertakes a comprehensive and practical study of market risk modeling in modern finance using the VaR methodology. Two newly developed risk models are proposed in this research, derived by integrating volatility modeling and the quantile regression technique. Compared to existing risk models, these two new models place more emphasis on dynamic risk adjustment. The empirical results on both real and simulated data show that, under certain circumstances, the risk predictions generated from these models are more accurate and efficient in capturing time-varying risk evolution than traditional risk measures. Academically, the aim of this research is to make some improvements and extensions of the existing market risk modeling techniques. In practice, the purpose of this research is to support risk managers in developing a dynamic market risk measurement system that will function well for different market states and asset categories. The system can be used by financial and non-financial institutions for either passive risk measurement or active risk control.
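A stylised example of combining a volatility filter with quantile regression to produce a dynamic VaR estimate is sketched below. It is not the thesis's exact specification: returns are simulated from a GARCH(1,1) process, volatility is filtered with a RiskMetrics-style EWMA, and the 5% VaR is the fitted conditional quantile given that volatility.

```python
# A minimal sketch of a dynamic VaR: EWMA-filtered volatility as the regressor
# in a quantile regression of returns at tau = 0.05.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 1500
omega, a, b = 1e-6, 0.08, 0.90           # GARCH(1,1) parameters for the simulation
r = np.empty(n)
h = np.empty(n)
h[0] = omega / (1 - a - b)
r[0] = np.sqrt(h[0]) * rng.standard_normal()
for t in range(1, n):
    h[t] = omega + a * r[t - 1] ** 2 + b * h[t - 1]
    r[t] = np.sqrt(h[t]) * rng.standard_normal()

# RiskMetrics-style EWMA variance, using only past returns at each date
lam = 0.94
ewma = np.empty(n)
ewma[0] = r[0] ** 2
for t in range(1, n):
    ewma[t] = lam * ewma[t - 1] + (1 - lam) * r[t - 1] ** 2
vol = np.sqrt(ewma)

# dynamic 5% VaR as the conditional quantile of returns given filtered volatility
tau = 0.05
X = sm.add_constant(vol)
fit = sm.QuantReg(r, X).fit(q=tau)
var_t = X @ fit.params                   # time-varying VaR (a negative return level)
hit_rate = np.mean(r < var_t)            # should be close to tau if the model is adequate
print(f"estimated 5% VaR hit rate: {hit_rate:.3f}")
```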
|
39 |
Order-statistics-based inferences for censored lifetime data and financial risk analysis
Sheng, Zhuo, January 2013 (has links)
This thesis focuses on applying order-statistics-based inferences to lifetime analysis and financial risk measurement. The first problem arises from fitting the Weibull distribution to progressively censored and accelerated life-test data. A new order-statistics-based inference procedure is proposed for both parameter and confidence interval estimation. The second problem can be summarised as adapting the inference used in the first problem to fitting the generalised Pareto distribution, especially when the sample size is small. With some modifications, the proposed inference is compared with classical methods and several relatively new methods that have emerged from the recent literature. The third problem studies a distribution-free approach to forecasting financial volatility, which is essentially the standard deviation of financial returns. Classical models of this approach use the interval between two symmetric extreme quantiles of the return distribution as a proxy for volatility. Two new models are proposed, which use intervals of expected shortfalls and of expectiles instead of intervals of quantiles. The different models are compared using empirical stock index data. Finally, attention is drawn to heteroskedastic quantile regression. The proposed joint modelling approach, which makes use of the parametric link between quantile regression and the asymmetric Laplace distribution, can provide estimates of the regression quantile and of the log-linear heteroskedastic scale simultaneously. Furthermore, the use of the expectation of the check function as a measure of quantile deviation is discussed.
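The check function and its link to the asymmetric Laplace distribution, mentioned at the end of the abstract, can be made concrete in a few lines: minimising the summed check loss over a constant recovers the sample tau-quantile, which is also the maximiser of an asymmetric Laplace likelihood in its location parameter. The sketch below illustrates only this link, not the thesis's joint heteroskedastic model; the data are simulated.

```python
# A minimal sketch of the check (pinball) function and its asymmetric Laplace link.
import numpy as np
from scipy.optimize import minimize_scalar

def check_loss(u, tau):
    """Check function rho_tau(u) = u * (tau - 1{u < 0})."""
    u = np.asarray(u, dtype=float)
    return u * (tau - (u < 0))

def ald_negloglik(mu, y, tau, sigma=1.0):
    """Negative log-likelihood of an asymmetric Laplace distribution with location mu."""
    z = (y - mu) / sigma
    return -(y.size * np.log(tau * (1 - tau) / sigma) - np.sum(check_loss(z, tau)))

rng = np.random.default_rng(6)
y = rng.lognormal(mean=0.0, sigma=0.8, size=5000)
tau = 0.75

# minimising the average check loss over a constant recovers the tau-quantile ...
res_check = minimize_scalar(lambda m: np.mean(check_loss(y - m, tau)),
                            bounds=(y.min(), y.max()), method="bounded")
# ... and so does maximising the asymmetric Laplace likelihood in its location
res_ald = minimize_scalar(lambda m: ald_negloglik(m, y, tau),
                          bounds=(y.min(), y.max()), method="bounded")

print("sample 0.75-quantile:    ", np.quantile(y, tau))
print("check-loss minimiser:    ", res_check.x)
print("ALD likelihood maximiser:", res_ald.x)
```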
|
40 |
Quantile-based methods for prediction, risk measurement and inference
Ally, Abdallah K., January 2010 (has links)
The focus of this thesis is on the employment of theoretical and practical quantile methods to address prediction, risk measurement and inference problems. From a prediction perspective, the problem of creating model-free prediction intervals for a future unobserved value of a random variable drawn from a sample distribution is considered. With the objective of reducing prediction coverage error, two common distribution transformation methods, based on the normal and exponential distributions, are presented and are theoretically demonstrated to attain exact and error-free prediction intervals respectively. The second problem studied is the estimation of expected shortfall via kernel smoothing. The goal here is to introduce methods that reduce the estimation bias of expected shortfall. To this end, several one-step bias-corrected expected shortfall estimators are presented, investigated via simulation studies and compared with existing one-step estimators. The third problem is that of constructing simultaneous confidence bands for quantile regression functions when the predictor variables are constrained within a region. In this context, a method is introduced that makes use of asymmetric Laplace errors in conjunction with a simulation-based algorithm to create confidence bands for quantile and interquantile regression functions. Furthermore, the simulation approach is extended to an ordinary least squares framework to build simultaneous bands for quantile functions of the classical regression model, both when the model errors are normally distributed and when this assumption is not fulfilled. Finally, attention is directed towards the construction of prediction intervals for realised volatility, exploiting an alternative volatility estimator based on the difference between two extreme quantiles. The proposed approach makes use of an AR-GARCH procedure to model time series of intraday quantiles and to forecast the predictive distribution of intraday returns. Moreover, two simple adaptations of an existing model are also presented.
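A basic kernel-smoothed expected shortfall estimator, in which the hard tail indicator is replaced by a smooth Gaussian-CDF weight, is sketched below as an illustration of the kind of smoothing the second problem concerns. It is not the thesis's one-step bias-corrected estimator; the bandwidth rule and the simulated data are assumptions.

```python
# A minimal sketch of a kernel-smoothed expected shortfall (ES) estimator,
# compared with the plain empirical tail average.
import numpy as np
from scipy.stats import norm

def smoothed_es(x, alpha=0.05, h=None):
    x = np.asarray(x, dtype=float)
    n = x.size
    if h is None:
        h = 1.06 * x.std(ddof=1) * n ** (-1 / 5)   # rule-of-thumb bandwidth
    q = np.quantile(x, alpha)                      # VaR at level alpha
    w = norm.cdf((q - x) / h)                      # smooth version of 1{x <= q}
    return np.sum(x * w) / (n * alpha)

rng = np.random.default_rng(7)
returns = rng.standard_t(df=5, size=2000) * 0.01   # heavy-tailed daily returns

alpha = 0.05
empirical_es = returns[returns <= np.quantile(returns, alpha)].mean()
print("empirical 5% ES:       ", round(empirical_es, 5))
print("kernel-smoothed 5% ES: ", round(smoothed_es(returns, alpha), 5))
```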
|