101

The Joint Modeling of Longitudinal Covariates and Censored Quantile Regression for Health Applications

Hu, Bo January 2022
The overall theme of this thesis is the joint modeling of longitudinal covariates and a censored survival outcome, where the survival outcome is modeled with a conditional quantile regression. In traditional joint modeling approaches, the survival outcome is usually specified parametrically, e.g., as a Cox regression. Censored quantile regression can model a survival outcome without pre-specifying a parametric likelihood or assuming proportional hazards. Existing censored quantile methods are mostly limited to fixed cross-sectional covariates, while in many longitudinal studies researchers wish to investigate the associations between longitudinal covariates and a survival outcome. The first part considers joint modeling with a survival outcome under a mixture of censoring types: left, interval, or right censoring. We pose a linear mixed effects model for the longitudinal covariate and a conditional quantile regression for the censored survival outcome, assuming that the longitudinal covariate and the survival outcome are conditionally independent given individual-level random effects. We propose a Gibbs sampling approach that extends a censored-quantile-based data augmentation algorithm to allow for a longitudinal covariate process. We also propose an iterative algorithm that alternately updates individual-level random effects and model parameters, handling the censored survival outcome through re-weighting. Both methods are illustrated on the LEGACY Girls Cohort Study to understand the influence of individual genetic profiles on pubertal development (i.e., the onset of breast development) while adjusting for BMI growth trajectories. The second part considers joint modeling with a randomly right-censored survival outcome. We again pose a linear mixed effects model for the longitudinal covariate and a conditional quantile regression for the censored survival outcome, under the same conditional independence assumption, and propose a Gibbs sampling approach extending a censored-quantile-based data augmentation algorithm to accommodate the longitudinal covariate process. Theoretical properties of the resulting parameter estimates are established. We also propose an iterative algorithm that alternately updates individual-level random effects and model parameters, handling censoring through re-weighting. Both methods are illustrated on the Mayo Clinic Primary Biliary Cholangitis data to assess the effect of D-penicillamine on the risk of liver transplantation or death, while controlling for age at registration and the serum bilirubin (serBilir) marker.
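To make the shared-random-effects idea concrete, here is a minimal two-stage sketch in Python. It is not the thesis's Gibbs sampler: a linear mixed model is fit to the longitudinal covariate, its estimated random effects (BLUPs) enter a quantile regression for log survival time, and right censoring is handled by inverse-probability-of-censoring re-weighting. All data, variable names, and tuning values are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
import statsmodels.api as sm

rng = np.random.default_rng(0)

# --- synthetic data (all names and values hypothetical) ---
n, m = 200, 5                                   # subjects, visits per subject
b = rng.normal(0, 1, n)                         # subject-level random intercepts
subj = np.repeat(np.arange(n), m)
t_vis = np.tile(np.arange(m), n).astype(float)
x_long = 1.0 + 0.3 * t_vis + b[subj] + rng.normal(0, 0.5, n * m)
T = np.exp(0.5 + 0.8 * b + rng.normal(0, 0.3, n))   # latent event times
C = rng.exponential(3.0, n)                         # censoring times
Y, delta = np.minimum(T, C), (T <= C).astype(float)

# Stage 1: linear mixed model for the longitudinal covariate; the BLUPs
# estimate the shared random effects linking the two submodels.
lmm = sm.MixedLM(x_long, sm.add_constant(t_vis), groups=subj).fit()
bhat = np.array([lmm.random_effects[g].iloc[0] for g in range(n)])

# Stage 2: quantile regression of log survival time on the BLUPs, with
# inverse-probability-of-censoring weights from a Kaplan-Meier estimate
# of the censoring distribution (ties handled loosely; a sketch only).
def km_surv(times, events):
    o = np.argsort(times)
    at_risk = len(times) - np.arange(len(times))
    return times[o], np.cumprod(1.0 - events[o] / at_risk)

tc, Sc = km_surv(Y, 1.0 - delta)                 # KM of the *censoring* time
w = delta / np.clip(np.interp(Y, tc, Sc), 0.05, None)

def check_loss(beta, tau=0.5):
    u = np.log(Y) - beta[0] - beta[1] * bhat
    return np.sum(w * u * (tau - (u < 0)))       # weighted pinball loss

fit = minimize(check_loss, x0=np.zeros(2), method="Nelder-Mead")
print("median-regression coefficients:", fit.x)
```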
102

Deep Quantile Regression for Unsupervised Anomaly Detection in Time-Series

Tambuwal, Ahmad I., Neagu, Daniel 18 November 2021
Time-series anomaly detection receives increasing research interest given the growing number of data-rich application domains. Recent additions to anomaly detection methods in the research literature include deep neural networks (DNNs, e.g., RNNs, CNNs, and autoencoders). The nature and performance of these algorithms in sequence analysis enable them to learn hierarchical discriminative features and the temporal structure of time series. However, their performance is affected by the usual assumption of a Gaussian distribution on the prediction error, which is either ranked or thresholded to label data instances as anomalous or not. An exact parametric distribution is often not directly relevant in many applications, though, and this assumption can produce faulty decisions from false anomaly predictions due to high variation in data interpretation. Outputs should instead be characterized by a level of confidence. Implementations therefore need prediction intervals (PIs) that quantify the uncertainty associated with the DNN point forecasts, which helps in making better-informed decisions and mitigates false anomaly alerts. Prior work has reduced false anomaly alerts through quantile regression, but only by using the quantile interval to identify uncertainty in the data. In this paper, an improved time-series anomaly detection method called deep quantile regression anomaly detection (DQR-AD) is proposed. The proposed method goes further and uses the quantile interval (QI) as an anomaly score, comparing it with a threshold to identify anomalous points in time-series data. Test runs of the proposed method on publicly available anomaly benchmark datasets demonstrate its effective performance over other methods that assume a Gaussian distribution on the prediction or reconstruction cost for detecting anomalies. This shows that our method is potentially less sensitive to the data distribution than existing approaches. / Petroleum Technology Development Fund (PTDF) PhD Scholarship, Nigeria (Award Number: PTDF/ED/PHD/IAT/884/16)
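A rough sketch of the QI-as-anomaly-score idea follows, with gradient boosting standing in for the paper's deep quantile network; the series, lag width, and threshold rule are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)

# Hypothetical univariate series with one injected anomaly; lagged windows
# serve as features for one-step-ahead quantile forecasts.
series = np.sin(np.linspace(0, 60, 1500)) + rng.normal(0, 0.1, 1500)
series[1200] += 2.5
lag = 20
X = np.lib.stride_tricks.sliding_window_view(series, lag)[:-1]
y = series[lag:]

lo = GradientBoostingRegressor(loss="quantile", alpha=0.05).fit(X, y)
hi = GradientBoostingRegressor(loss="quantile", alpha=0.95).fit(X, y)

# Quantile interval width as the anomaly score, compared with a threshold
# (here an arbitrary high quantile of the scores themselves).
qi = hi.predict(X) - lo.predict(X)
threshold = np.quantile(qi, 0.995)
print("flagged series indices:", np.where(qi > threshold)[0] + lag)
```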
103

Statistical Methods for Variability Management in High-Performance Computing

Xu, Li 15 July 2021
High-performance computing (HPC) variability management is an important topic in computer science. Research topics include experimental designs for efficient data collection, surrogate models for predicting the performance variability, and system configuration optimization. Due to the complex architecture of HPC systems, a comprehensive study of HPC variability needs large-scale datasets, and experimental design techniques are useful for improved data collection. Surrogate models are essential to understand the variability as a function of system parameters, and can be obtained by mathematical and statistical models. After predicting the variability, optimization tools are needed for future system designs. This dissertation focuses on HPC input/output (I/O) variability through three main chapters. After the general introduction in Chapter 1, Chapter 2 focuses on prediction models for a scalar description of I/O variability. A comprehensive comparison study is conducted, and major surrogate models for computer experiments are investigated. In addition, a tool is developed for system configuration optimization based on the chosen surrogate model. Chapter 3 conducts a detailed study of the multimodal phenomena in the I/O throughput distribution and proposes an uncertainty estimation method for the optimal number of runs for future experiments. Mixture models are used to identify the number of modes in throughput distributions at different configurations. This chapter also addresses the uncertainty in parameter estimation and derives a formula for sample size calculation. The developed method is then applied to HPC variability data. Chapter 4 focuses on the prediction of functional outcomes with both qualitative and quantitative factors. Instead of a scalar description of I/O variability, the distribution of I/O throughput provides a comprehensive description of I/O variability. We develop a modified Gaussian process for functional prediction and apply the developed method to the large-scale HPC I/O variability data. Chapter 5 contains some general conclusions and areas for future work. / Doctor of Philosophy / This dissertation focuses on three projects that are all related to statistical methods for performance variability management in high-performance computing (HPC). HPC systems are computer systems that achieve high performance by aggregating a large number of computing units. The performance of HPC is measured by the throughput of a benchmark called the IOzone Filesystem Benchmark. The performance variability is the variation among throughputs when the system configuration is fixed. Variability management involves studying the relationship between performance variability and the system configuration. In Chapter 2, we use several existing prediction models to predict the standard deviation of throughputs given different system configurations and compare the accuracy of predictions. We also conduct HPC system optimization using the chosen prediction model as the objective function. In Chapter 3, we use the mixture model to determine the number of modes in the distribution of throughput under different system configurations. In addition, we develop a model to determine the number of additional runs for future benchmark experiments. In Chapter 4, we develop a statistical model that can predict the throughput distributions given the system configurations. We also compare the prediction of summary statistics of the throughput distributions with existing prediction models.
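A hedged sketch of the surrogate-plus-optimization workflow in the spirit of Chapter 2: a Gaussian process maps hypothetical system configurations to predicted variability, and a grid scan stands in for a real optimizer. The configuration space and response surface are invented for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(2)

# Invented configurations: (threads, I/O block size in KB) mapped to the
# standard deviation of throughput across repeated runs.
X = rng.uniform([1, 64], [64, 1024], size=(80, 2))
y = 5 + 0.05 * X[:, 0] + 200.0 / X[:, 1] + rng.normal(0, 0.3, 80)

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([10.0, 100.0]),
                              normalize_y=True).fit(X, y)

# "Optimization": scan a configuration grid for the lowest predicted
# variability; a proper optimizer would replace this loop.
grid = np.array([[t, blk] for t in range(1, 65, 4)
                 for blk in range(64, 1025, 64)], dtype=float)
pred, sd = gp.predict(grid, return_std=True)
best = np.argmin(pred)
print("config with lowest predicted variability:", grid[best])
print("predicted variability:", round(pred[best], 2),
      "GP uncertainty:", round(sd[best], 2))
```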
104

The asymmetry of the New Keynesian Phillips Curve in the euro area

Chortareas, G., Magkonis, Georgios, Panagiotidis, T. January 2012
Using a two-stage quantile regression framework, we uncover significant asymmetries across quantiles for all coefficients in an otherwise standard New Keynesian Phillips Curve (NKPC) for the euro area. A pure NKPC specification accurately captures inflation dynamics at high inflation quantiles.
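One way the two-stage quantile estimation could look in code, under simulated data: expected inflation is first projected on an instrument set, and the fitted values enter a quantile regression at several quantiles. The series and instruments are illustrative assumptions, not the paper's euro-area data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)

# Simulated inflation (pi) and real marginal cost (mc) series.
T = 300
mc = rng.normal(0, 1, T)
pi = np.zeros(T)
for t in range(1, T):
    pi[t] = 0.5 * pi[t - 1] + 0.1 * mc[t] + rng.normal(0, 0.3)

y = pi[1:-1]                                 # pi_t
pi_lead, pi_lag = pi[2:], pi[:-2]            # pi_{t+1} and pi_{t-1}

# Stage 1: project expected inflation on an instrument set (lags).
Z = sm.add_constant(np.column_stack([pi_lag, mc[:-2], mc[1:-1]]))
pi_lead_hat = sm.OLS(pi_lead, Z).fit().fittedvalues

# Stage 2: the NKPC estimated by quantile regression at several quantiles.
X = sm.add_constant(np.column_stack([pi_lead_hat, pi_lag, mc[1:-1]]))
for tau in (0.1, 0.5, 0.9):
    res = sm.QuantReg(y, X).fit(q=tau)
    print(f"tau={tau}: forward={res.params[1]:+.2f} "
          f"backward={res.params[2]:+.2f} mc={res.params[3]:+.2f}")
```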
105

Advancing Credit Risk Analysis through Machine Learning Techniques : Utilizing Predictive Modeling to Enhance Financial Decision-Making and Risk Assessment

Lampinen, Henrik, Nyström, Isac January 2024
Assessment of credit risk is crucial for the financial stability of banks, directly influencing their lending policies and economic resilience. This thesis explores advanced techniques for predictive modeling of Loss Given Default (LGD) and credit losses within major Swedish banks, focusing on sophisticated methods from statistics and machine learning. The study evaluates the effectiveness of various models, including linear regression, quantile regression, extreme gradient boosting (XGBoost), and artificial neural networks (ANNs), to address the complexity of LGD's bimodal distribution and the non-linearity in credit loss data. Key findings highlight the robustness of ANNs and XGBoost in modeling complex data patterns, offering significant improvements over traditional linear models. The research identifies critical macroeconomic indicators, such as real estate prices, inflation, and unemployment rates, through an Elastic Net model, underscoring their predictive power in assessing credit risk.
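A small sketch of the modeling comparison on synthetic, bimodal LGD data: a quantile gradient-boosting model against a linear baseline, scored by pinball loss (mean_pinball_loss requires scikit-learn >= 0.24). The data-generating process is invented for illustration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_pinball_loss

rng = np.random.default_rng(4)

# Invented LGD data with the bimodal shape the thesis describes: defaults
# cluster near full recovery (LGD ~ 0) or near full loss (LGD ~ 1).
n = 2000
X = rng.normal(size=(n, 3))                  # stand-ins for risk drivers
p_highloss = 1.0 / (1.0 + np.exp(-X[:, 0]))
lgd = np.where(rng.uniform(size=n) < p_highloss,
               rng.beta(8, 2, n), rng.beta(2, 8, n))

gb90 = GradientBoostingRegressor(loss="quantile", alpha=0.9).fit(X, lgd)
lin = LinearRegression().fit(X, lgd)

print("pinball loss at tau = 0.9 (lower is better)")
print("  quantile boosting:", mean_pinball_loss(lgd, gb90.predict(X), alpha=0.9))
print("  linear regression:", mean_pinball_loss(lgd, lin.predict(X), alpha=0.9))
```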
106

A Comparison of Statistical Methods to Generate Short-Term Probabilistic Forecasts for Wind Power Production Purposes in Iceland

Jóhannsson, Arnór Tumi January 2022
Accurate forecasts of wind speed and power production are of great value for wind power producers. In Southwest Iceland, wind power installations are being planned by various entities. This study aims to create optimal wind speed and wind power production forecasts for wind power production in Southwest Iceland by applying statistical post-processing methods to a deterministic HARMONIE-AROME forecast at a single point in space. Three such methods were implemented for a 22-month-long set of forecast-observation samples at 1 h resolution: Temporal Smoothing (TS), Observational Distributions on Discrete Intervals (ODDI, a relatively simple classification algorithm) and Quantile Regression Forest (QRF, a relatively complicated machine learning algorithm). Wind power forecasts were derived directly from forecasts of wind speed using an idealized power curve. Four different metrics were given equal weight in the evaluation of the methods: Root Mean Square Error (RMSE), Miss Rate of the 95-percent forecast interval (MR95), Mean Median Forecast Interval Width (MMFIW, a metric of forecast sharpness) and Continuous Ranked Probability Score (CRPS). Of the three methods, TS performed inadequately while ODDI and QRF performed significantly better, and similarly to each other. Both ODDI and QRF predict wind speed and power production slightly more accurately than the deterministic AROME forecast in terms of root mean square error. In addition to an overall evaluation of all three methods, ODDI and QRF were evaluated conditionally. The results indicate that QRF performs significantly better than ODDI at forecasting wind speed and wind power at wind speeds above 13 m/s. Otherwise, no strong discrepancies were found in their conditional performance. The results of this study are limited by a relatively scarce data set and correspondingly short time series. The results indicate that applying statistical post-processing methods of varying complexity to deterministic wind speed forecasts is a viable approach to gaining probabilistic insight into the wind power potential at a given location.
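A sketch of the QRF idea on invented forecast-observation pairs: quantiles over per-tree predictions of a random forest approximate Meinshausen's quantile regression forest (the true QRF instead weights training responses by leaf membership), and metrics in the style of the study's RMSE, MR95, and MMFIW are computed from the resulting interval.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)

# Invented pairs: a deterministic wind-speed forecast and observations
# whose spread grows with wind speed.
n = 2000
fcst = rng.gamma(4, 2, n)
obs = fcst + rng.normal(0, 1 + 0.2 * fcst, n)
X = fcst.reshape(-1, 1)

rf = RandomForestRegressor(n_estimators=300, min_samples_leaf=20,
                           random_state=0).fit(X, obs)

# Per-tree predictions as an approximate predictive distribution.
per_tree = np.stack([t.predict(X) for t in rf.estimators_])
q025, q50, q975 = np.quantile(per_tree, [0.025, 0.5, 0.975], axis=0)

rmse = np.sqrt(np.mean((q50 - obs) ** 2))
miss95 = np.mean((obs < q025) | (obs > q975))    # cf. the study's MR95
width = np.median(q975 - q025)                   # cf. the MMFIW sharpness metric
print(f"RMSE={rmse:.2f}  95%-interval miss rate={miss95:.3f}  "
      f"median interval width={width:.2f}")
```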
107

Quantile regression in risk calibration

Chao, Shih-Kang 05 June 2015
Quantile regression studies the conditional quantile function Q_{Y|X}(τ), which satisfies F_{Y|X}(Q_{Y|X}(τ)) = τ for all τ ∈ (0, 1), where F_{Y|X} is the conditional CDF of Y given X. Quantile regression allows a closer inspection of the conditional distribution beyond the conditional moments. The technique is particularly useful for, for example, the Value-at-Risk (VaR), which the Basel accords (2011) require all banks to report, and for the "quantile treatment effect" and "conditional stochastic dominance (CSD)", economic concepts for measuring the effectiveness of a government policy or a medical treatment. For all its applicability, developing the technique of quantile regression is more challenging than mean regression: one must be adept with general regression problems and M-estimators, and one must deal with non-smooth loss functions. In this dissertation, Chapter 2 is devoted to empirical risk management during financial crises using quantile regression; Chapters 3 and 4 address the issues of high dimensionality and nonparametric techniques in quantile regression.
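A compact illustration of VaR as a conditional quantile, on simulated returns with volatility clustering: regressing the 5% quantile of returns on a lagged volatility proxy yields a VaR forecast with an in-sample violation rate near the nominal level. This is purely a sketch, not the dissertation's estimators.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)

# Simulated daily returns with volatility clustering.
T = 1500
sigma = np.empty(T); sigma[0] = 0.01
r = np.empty(T); r[0] = 0.0
for t in range(1, T):
    sigma[t] = np.sqrt(1e-6 + 0.10 * r[t - 1] ** 2 + 0.85 * sigma[t - 1] ** 2)
    r[t] = sigma[t] * rng.standard_normal()

# One-day 5% VaR as the 0.05 conditional quantile of returns, given
# yesterday's absolute return as a crude volatility proxy.
X = sm.add_constant(np.abs(r[:-1]))
res = sm.QuantReg(r[1:], X).fit(q=0.05)
var5 = -(res.params[0] + res.params[1] * np.abs(r[:-1]))
print("in-sample violation rate:", np.mean(r[1:] < -var5))   # ~ 0.05
```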
108

Modelling Conditional Quantiles of CEE Stock Market Returns

Tóth, Daniel January 2015
Correctly specified models to forecast returns of indices are important for investors to minimize risk on financial markets. This thesis focuses on conditional Value at Risk modelling, employing a flexible quantile regression framework and hence avoiding assumptions on the return distribution. We apply semi-parametric linear quantile regression (LQR) models with realized variance, and also models with positive and negative semivariance, which allow for direct modelling of the quantiles. Four European stock price indices are taken into account: the Czech PX, the Hungarian BUX, the German DAX and the London FTSE 100. The objective is to investigate how the use of realized variance influences VaR accuracy and the correlation between the Central & Eastern and Western European indices. The main contribution is the application of LQR models for modelling conditional quantiles and a comparison of the correlation between European indices using realized measures. Our results show that linear quantile regression models for one-step-ahead forecasts provide a better fit and more accurate modelling than the classical VaR model with its assumption of normally distributed returns. LQR models with realized variance can therefore be used as an accurate tool for investors. Moreover, we show that diversification benefits are...
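A minimal sketch of an LQR-with-realized-measures specification on simulated data: tomorrow's return quantile is regressed on today's positive and negative realized semivariances, giving the one-step-ahead 5% VaR directly, with no distributional assumption on returns. The semivariance series here are simulated stand-ins, not computed from intraday data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)

# Simulated returns plus realized semivariances; in the thesis these come
# from intraday data, here they are drawn directly as stand-ins.
T = 1000
rv_pos = rng.gamma(2, 0.5e-4, T)      # positive realized semivariance
rv_neg = rng.gamma(2, 0.7e-4, T)      # negative realized semivariance
r = rng.standard_normal(T) * np.sqrt(rv_pos + rv_neg)

# LQR: tomorrow's 5% return quantile on today's semivariances, which is
# the one-step-ahead VaR without a normality assumption.
X = sm.add_constant(np.column_stack([rv_pos[:-1], rv_neg[:-1]]))
res = sm.QuantReg(r[1:], X).fit(q=0.05)
print("fitted 5%-quantile coefficients:", res.params)
```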
109

The test score differences between public and private schools in Brazil: a new quantile approach

Moraes, André Guerra Esteves de 14 June 2012
This study seeks to bring robustness to the results observed in comparative studies of public and private schools in Brazil, which indicate a greater capacity of the private school network to generate educational quality. To that end, a quantile approach based on selection on observables was applied; unlike other approaches, the one used in this dissertation has asymptotic inference. The data set is the 2005 SAEB eighth-grade mathematics tests. Once again the superiority of private schools was evident, even after controlling for various student, teacher, and school covariates. This result strengthens the case for voucher policies for private schools, although additional studies on the subject are needed. Among the covariates that would reduce the distance between the score distributions of public- and private-school students, determinants of the student group at the school and classroom level (peer group effects) were found to be the most important. This corroborates other work showing the importance of these factors in explaining the greater effectiveness of private schools relative to public schools.
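A toy version of the quantile comparison under selection on observables: a quantile regression of test scores on a private-school indicator plus a socioeconomic control, at several quantiles. All scores and effect sizes are fabricated for illustration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)

# Fabricated SAEB-style math scores; the noise scale differs by school
# type, so the public-private gap varies across quantiles.
n = 4000
private = rng.binomial(1, 0.3, n)
ses = rng.normal(0, 1, n) + 0.5 * private          # observed covariate
score = (250 + 10 * private + 10 * ses
         + rng.gumbel(0, 12, n) * (1 + 0.4 * private))

X = sm.add_constant(np.column_stack([private, ses]))
for tau in (0.1, 0.25, 0.5, 0.75, 0.9):
    res = sm.QuantReg(score, X).fit(q=tau)
    print(f"tau={tau}: private-school gap = {res.params[1]:.1f} points")
```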
110

Analysis of dividend policy: an application of quantile regression

Ströher, Jéferson Rodrigo 31 March 2015
Dividend policy is important because it involves the decision of whether or not to distribute financial resources, through dividends or interest on own capital, at differentiated percentages and, in the Brazilian case, under differentiated taxation. The objective of this work was to identify which factors impact the payout ratio: size, liquidity, profitability, debt, investment, profit, revenue, and ownership concentration. To analyze these aspects, quantile regression was used, with data from the Economatica base for companies listed on the BM&FBovespa from 2009 to 2013, comprising 3,073 observations. The estimates allow us to conclude that: a) there is a positive relation between firm size and the dependent variable at all quantiles; b) liquidity is positively related to payout at the 0.5 and 0.75 quantiles; c) profitability shows a positive relation from the 0.5 quantile on; d) debt shows a negative relation from the 0.5 quantile on; e) investment shows a negative relation at the 0.75 and 0.9 quantiles; f) net profit shows a negative relation at the 0.5 quantile; g) revenue shows a positive relation at the median and at the 0.9 quantile; h) concentration is not statistically significant at 1%, although a positive relation appears at the 0.75 and 0.9 quantiles and a negative one at the others; i) of two dummies, a financial-sector dummy shows a positive relation from the 0.1 to the 0.5 quantile, and a dummy for firms that distributed proceeds even at a loss shows an increasingly strong negative association with payout. All the relations described show how payout varied with changes in the variables above.
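A sketch of the estimation design on a fabricated firm-year sample: the same payout equation is fit at several quantiles, letting each coefficient vary across the payout distribution, which is what the quantile-by-quantile findings above exploit.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(9)

# Fabricated firm-year sample in the spirit of the Economatica panel.
n = 3000
df = pd.DataFrame({
    "size": rng.normal(14, 2, n),         # log total assets
    "liquidity": rng.gamma(2, 1, n),
    "leverage": rng.uniform(0, 1, n),
})
df["payout"] = (5 + 3 * df["size"] + 2 * df["liquidity"]
                - 10 * df["leverage"] + rng.normal(0, 8, n)).clip(0, 100)

# The same payout equation at several quantiles: each coefficient may
# change sign or size along the payout distribution.
for tau in (0.1, 0.25, 0.5, 0.75, 0.9):
    res = smf.quantreg("payout ~ size + liquidity + leverage", df).fit(q=tau)
    print(f"tau={tau}: size={res.params['size']:+.2f} "
          f"leverage={res.params['leverage']:+.2f}")
```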
