1

Prior elicitation and variable selection for Bayesian quantile regression

Al-Hamzawi, Rahim Jabbar Thaher January 2013 (has links)
Bayesian subset selection suffers from three important difficulties: assigning priors over model space, assigning priors to all components of the regression coefficient vector given a specific model, and Bayesian computational efficiency (Chen et al., 1999). These difficulties become more challenging in the Bayesian quantile regression framework when one is interested in assigning priors that depend on different quantile levels. The objective of Bayesian quantile regression (BQR), a newly proposed tool, is to deal with unknown parameters and model uncertainty in quantile regression (QR). However, Bayesian subset selection in quantile regression models is usually difficult due to the computational challenges and the non-availability of conjugate prior distributions that depend on the quantile level. These challenges are typically addressed via either a penalised likelihood function or stochastic search variable selection (SSVS). Such methods usually use symmetric prior distributions for the regression coefficients, such as the Gaussian and Laplace, which may be suitable for median regression. However, an extreme quantile regression should have different regression coefficients from the median regression, and thus the priors for quantile regression coefficients should depend on the quantile. This thesis focuses on three challenges: assigning standard quantile-dependent prior distributions for the regression coefficients, assigning suitable quantile-dependent priors over model space, and achieving computational efficiency. The first of these challenges is studied in Chapter 2, in which a quantile-dependent prior elicitation scheme is developed. In particular, an extension of Zellner's prior which allows for a conditionally conjugate, quantile-dependent prior in Bayesian quantile regression is proposed.
The prior is generalised in Chapter 3 by introducing a ridge parameter to address important challenges that may arise in some applications, such as multicollinearity and overfitting. The proposed prior is also used in Chapter 4 for subset selection of the fixed and random coefficients in a linear mixed-effects QR model. In Chapter 5 we specify normal-exponential prior distributions for the regression coefficients, which can provide adaptive shrinkage and represent an alternative to the Bayesian Lasso quantile regression model. For the second challenge, we assign a quantile-dependent prior over model space in Chapter 2. The prior is based on the percentage bend correlation, which depends on the quantile level. This prior is novel and is used in Bayesian regression for the first time. For the third challenge, that of computational efficiency, Gibbs samplers are derived and set up to facilitate the computation of the proposed methods. In addition to these three major challenges, the thesis also addresses other important issues, such as regularisation in quantile regression and the selection of both random and fixed effects in mixed quantile regression models.
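The mixture representation that underlies Gibbs sampling for quantile regression can be sketched compactly. Below is an illustrative sampler at quantile level tau, assuming the standard normal-exponential (asymmetric Laplace) location-scale mixture with the scale fixed at 1 and a vague normal prior on the coefficients; it is not the thesis's quantile-dependent Zellner-type prior, and the function name and all settings are hypothetical:

```python
import numpy as np

def gibbs_bqr(y, X, tau, n_iter=800, prior_var=100.0, seed=0):
    """Gibbs sampler for quantile regression at level tau via the
    normal-exponential mixture of the asymmetric Laplace likelihood."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    theta = (1 - 2 * tau) / (tau * (1 - tau))
    psi2 = 2.0 / (tau * (1 - tau))
    v = np.ones(n)                      # latent exponential mixing variables
    B0inv = np.eye(p) / prior_var       # vague normal prior precision
    draws = np.empty((n_iter, p))
    for t in range(n_iter):
        # beta | v, y: normal full conditional
        w = 1.0 / (psi2 * v)
        cov = np.linalg.inv(B0inv + (X * w[:, None]).T @ X)
        mean = cov @ (X.T @ (w * (y - theta * v)))
        beta = rng.multivariate_normal(mean, cov)
        # v_i | beta, y: GIG(1/2, a_i, b), drawn as a reciprocal inverse Gaussian
        a = (y - X @ beta) ** 2 / psi2 + 1e-10
        b = 2.0 + theta ** 2 / psi2
        v = 1.0 / rng.wald(np.sqrt(b / a), b)
        draws[t] = beta
    return draws

# illustrative run on simulated data (all numbers hypothetical)
rng = np.random.default_rng(1)
x = rng.normal(size=300)
X = np.column_stack([np.ones(300), x])
y = 1.0 + 2.0 * x + 0.3 * rng.normal(size=300)
beta_hat = gibbs_bqr(y, X, tau=0.5).mean(axis=0)
```

Note that the mixture constants theta and psi2 change with tau, which is one way to see why priors elicited for the median need not suit extreme quantiles.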
2

Joint synchronization of clock phase offset, skew and drift in reference broadcast synchronization (RBS) protocol

Sari, Ilkay 02 June 2009 (has links)
Time synchronization in wireless ad-hoc sensor networks is a crucial piece of infrastructure, so achieving good clock synchronization among the nodes of such networks is a fundamental design problem. Motivated by this fact, this thesis formulates the joint maximum likelihood (JML) estimator for relative clock phase offset and skew under the exponential noise model for the reference broadcast synchronization protocol, and finds it via a direct algorithm. The Gibbs Sampler is also proposed for joint estimation of relative clock phase offset and skew, and is shown to provide superior performance compared to the JML estimator. Lower and upper bounds for the mean-square errors (MSE) of the JML estimator and the Gibbs Sampler are introduced in terms of the MSE of the uniform minimum variance unbiased estimator and the conventional best linear unbiased estimator, respectively. The suitability of the Gibbs Sampler for estimating additional unknown parameters is shown by applying it to the problem in which synchronization of clock drift is also needed.
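As a point of reference for the estimators compared above, the conventional linear baseline can be sketched: regress one node's receive timestamps on the reference node's to read off relative offset and skew. The exponential-delay assumption matters because the delay's nonzero mean leaks into the offset estimate, which is part of what the JML and Gibbs treatments address. All numbers below are hypothetical:

```python
import numpy as np

def estimate_offset_skew(t_ref, t_node):
    """Least-squares baseline: fit t_node ~ (1 + skew) * t_ref + offset
    from timestamps two receivers recorded for the same broadcasts."""
    slope, intercept = np.polyfit(t_ref, t_node, 1)
    return intercept, slope - 1.0

# node clock runs 50 ppm fast with a 2 ms offset; exponential receive delays
rng = np.random.default_rng(0)
t_ref = np.linspace(0.0, 100.0, 200)
t_node = (1 + 50e-6) * t_ref + 2e-3 + rng.exponential(1e-4, size=200)
offset, skew = estimate_offset_skew(t_ref, t_node)
# the exponential delay's mean (here 1e-4 s) biases the offset estimate upward
```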
3

Bayesian Inference of a Finite Population under Selection Bias

Xu, Zhiqing 01 May 2014 (has links)
The length-biased sampling method yields samples from a weighted distribution. Given the underlying distribution of the population, one can estimate the attributes of the population by converting the weighted samples. In this thesis, the generalized gamma distribution is taken as the underlying distribution of the population, and inference is made for the weighted distribution. Models with both known and unknown finite population size are considered. In the models with known finite population size, maximum likelihood estimation and bootstrapping methods are used to derive the distributions of the parameters and the population mean. For the sake of comparison, models both with and without the selection bias are built. Computer simulation results show that the model with selection bias gives better predictions of the population mean. In the model with unknown finite population size, the distributions of the population size as well as the sample complements are derived. Bayesian analysis is performed using numerical methods. Both the Gibbs sampler and a random sampling method are employed to generate the parameters from their joint posterior distribution. The fit of the size-biased samples is checked using the conditional predictive ordinate.
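The core conversion step, recovering population attributes from length-biased samples, can be illustrated with a plain gamma population (a special case of the generalized gamma) and the classical harmonic-mean correction; the sizes and parameters are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
# population: gamma(shape=2, scale=2), true mean 4
pop = rng.gamma(2.0, 2.0, size=200_000)

# length-biased draw: inclusion probability proportional to the value itself
idx = rng.choice(pop.size, size=20_000, p=pop / pop.sum())
sample = pop[idx]

naive = sample.mean()                         # biased: estimates E[X^2]/E[X] = 6 here
weighted = sample.size / np.sum(1.0 / sample)  # harmonic mean undoes the weighting
```

The harmonic-mean estimator works because E[1/X] under the length-biased distribution equals 1/mu for population mean mu.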
4

A comprehensive analysis of extreme rainfall

Kagoda, Paulo Abuneeri 13 August 2008 (has links)
No description available.
5

Modeling the Performance of a Baseball Player's Offensive Production

Smith, Michael Ross 09 March 2006 (has links) (PDF)
This project addresses the problem of comparing the offensive abilities of players from different eras in Major League Baseball (MLB). We study players from the perspective of an overall offensive summary statistic that is highly linked with scoring runs, known as the Berry Value. We build an additive model to estimate the innate ability of the player, the effect of the relative level of competition of each season, and the effect of age on performance using piecewise age curves. Using hierarchical Bayes methodology with Gibbs sampling, we model each of these effects for each individual. The results of the hierarchical Bayes model permit us to link players from different eras and to rank them across the modern era of baseball (1900-2004) on the basis of their innate overall offensive ability. The top of the rankings, led by Babe Ruth, Lou Gehrig, and Stan Musial, includes many Hall of Famers and some of the most productive offensive players in the history of the game. We also find trends in overall offensive ability in MLB that reflect rule and cultural changes. Based on the model, MLB is currently at a high level of run production compared to the levels of run production over the last century.
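The additive structure described above (player ability plus season effect plus a piecewise age curve) can be sketched with simulated data; here an ordinary least-squares fit stands in for the hierarchical Bayes/Gibbs machinery, and every number is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n_players, n_seasons = 50, 20
ability = rng.normal(0.0, 1.0, n_players)      # innate offensive ability
season = rng.normal(0.0, 0.3, n_seasons)       # season-level competition effect
season -= season.mean()                        # identifiability: effects sum to zero

rows, y = [], []
for p in range(n_players):
    debut = rng.integers(0, 10)
    for t in range(debut, min(debut + 12, n_seasons)):
        age = 20 + t - debut
        # piecewise "age curve": rise to 27, then decline
        age_eff = 0.05 * min(age - 20, 7) - 0.1 * max(age - 27, 0)
        x = np.zeros(n_players + n_seasons + 2)
        x[p] = 1.0                             # player dummy
        x[n_players + t] = 1.0                 # season dummy
        x[-2] = min(age - 20, 7)               # age spline basis, piece 1
        x[-1] = max(age - 27, 0)               # age spline basis, piece 2
        rows.append(x)
        y.append(ability[p] + season[t] + age_eff + rng.normal(0, 0.1))

Xd, y = np.array(rows), np.array(y)
coef, *_ = np.linalg.lstsq(Xd, y, rcond=None)  # minimum-norm additive fit
resid = Xd @ coef - y
```

The player coefficients recover innate ability up to a constant shift, which is enough to rank players across eras.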
6

The Impact of Two-Rate Taxes on Construction in Pennsylvania

Plassmann, Florenz 10 July 1997 (has links)
The evaluation of policy-relevant economic research requires an ethical foundation. Classical liberal theory provides the requisite foundation for this dissertation, which uses various econometric tools to estimate the effects of shifting some of the property tax from buildings to land in 15 cities in Pennsylvania. Economic theory predicts that such a shift will lead to higher building activity. However, this prediction has been supported little by empirical evidence so far. The first part of the dissertation examines the effect of the land-building tax differential on the number of building permits that were issued in 219 municipalities in Pennsylvania between 1972 and 1994. For such count data a conventional analysis based on a continuous distribution leads to incorrect results; a discrete maximum likelihood analysis with a negative binomial distribution is more appropriate. Two models, a non-linear and a fixed effects model, are developed to examine the influence of the tax differential. Both models suggest that this influence is positive, albeit not statistically significant. Application of maximum likelihood techniques is computationally cumbersome if the assumed distribution of the data cannot be written in closed form. The negative binomial distribution is the only discrete distribution with a variance that is larger than its mean that can easily be applied, although it might not be the best approximation of the true distribution of the data. The second part of the dissertation uses a Markov Chain Monte Carlo method to examine the influence of the tax differential on the number of building permits, under the assumption that building permits are generated by a Poisson process whose parameter varies lognormally. Contrary to the analysis in the first part, the tax is shown to have a strong and significantly positive impact on the number of permits. 
The third part of the dissertation uses a fixed-effects weighted least squares method to estimate the effect of the tax differential on the value per building permit. The tax coefficient is not significantly different from zero. Still, the overall impact of the tax differential on the total value of construction is shown to be positive and statistically significant. / Ph. D.
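The overdispersion that motivates both the negative binomial analysis and the Poisson-lognormal MCMC treatment can be seen directly: a Poisson whose rate varies lognormally across observations has variance well above its mean, so a plain Poisson fit (variance equal to mean) is inadequate. A minimal simulation, with hypothetical parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
# building-permit-style counts: Poisson rates varying lognormally across
# municipality-years, as in the second part of the dissertation
lam = rng.lognormal(mean=0.0, sigma=1.0, size=50_000)
counts = rng.poisson(lam)
m, v = counts.mean(), counts.var()
# v far exceeds m, so a distribution with variance larger than its mean
# (negative binomial, or a mixed Poisson fitted by MCMC) is needed
```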
7

A Random-Linear-Extension Test Based on Classic Nonparametric Procedures

Cao, Jun January 2009 (has links)
Most distribution-free nonparametric methods depend on the ranks or orderings of the individual observations. This dissertation develops methods for situations in which only partial information about the ranks is available. A random-linear-extension exact test and an empirical version of the random-linear-extension test are proposed as a new way to compare groups of data with partial orders. The basic computational procedure is to generate all possible permutations constrained by the known partial order, using a randomization method similar in nature to multiple imputation. The random-linear-extension test can be implemented simply using a Gibbs Sampler to generate a random sample of complete orderings. Given a complete ordering, standard nonparametric methods, such as the Wilcoxon rank-sum test, can be applied, and the corresponding test statistics and rejection regions can be calculated. As a direct result of the new method, a single p-value is replaced by a distribution of p-values. This is related to recent work on fuzzy p-values, introduced by Geyer and Meeden in Statistical Science in 2005. A special case is the comparison of two groups when only two objects can be compared at a time. Three matching schemes, random matching, ordered matching, and reverse matching, are introduced and compared with one another. The results described in this dissertation provide some surprising insights into the statistical information in partial orderings. / Statistics
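The mechanics, drawing complete orderings consistent with a partial order and applying a rank-based statistic to each, can be sketched as follows. The randomized topological sort below is not exactly uniform over linear extensions (the dissertation's Gibbs sampler addresses that), and the partial order and groups are hypothetical:

```python
import numpy as np

def random_linear_extension(n, order, rng):
    """One random topological sort of items 0..n-1 consistent with the
    partial order `order`, a set of (a, b) pairs meaning a precedes b.
    Randomized Kahn's algorithm: every draw respects the known comparisons."""
    preds = {i: set() for i in range(n)}
    succs = {i: set() for i in range(n)}
    for a, b in order:
        preds[b].add(a)
        succs[a].add(b)
    out, ready = [], [i for i in range(n) if not preds[i]]
    while ready:
        i = ready.pop(rng.integers(len(ready)))   # random eligible item
        out.append(i)
        for j in succs[i]:
            preds[j].discard(i)
            if not preds[j]:
                ready.append(j)
    return out

def rank_sum(extension, group_a):
    """Wilcoxon rank-sum statistic for group_a under a complete ordering."""
    ranks = {item: r + 1 for r, item in enumerate(extension)}
    return sum(ranks[i] for i in group_a)

rng = np.random.default_rng(0)
order = {(0, 2), (1, 3), (4, 5)}          # hypothetical partial comparisons
stats = [rank_sum(random_linear_extension(6, order, rng), {0, 1, 4})
         for _ in range(500)]
```

Each draw yields one value of the rank-sum statistic, so the usual single statistic (and hence a single p-value) is replaced by a distribution over draws.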
8

Performance Measurement in the eCommerce Industry.

Donkor, Simon 29 April 2003 (has links)
The eCommerce industry introduced new business principles, as well as new strategies for achieving them, and as a result some traditional measures of success are no longer valid. We classified and ranked the performance of twenty business-to-consumer eCommerce companies by developing critical benchmarks using the Balanced Scorecard methodology. We applied a latent class model, a statistical model within the Bayesian framework, to facilitate the determination of the best and worst performing companies. An eCommerce site's greatest asset is its customers, which is why some of the most valued and sophisticated metrics used today revolve around customer behavior. The results from our classification and ranking procedure showed that companies that ranked high overall also ranked comparatively well in the customer-analysis ranking. For example, Amazon.com, one of the highest-rated eCommerce companies, with a large customer base, ranked second on the critical benchmark developed for measuring customer analysis. The results from our simulation also showed that the latent class model is a good fit for the classification procedure and has a high classification rate for the worst and best performing companies. The resulting work offers a practical tool with the ability to identify profitable investment opportunities for financial managers and analysts.
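A latent class model of the kind described can be sketched as a two-class Bernoulli mixture fitted by EM, a maximum-likelihood stand-in for the Bayesian fit; the benchmark flags, class count, and probabilities below are all hypothetical:

```python
import numpy as np

def latent_class_em(X, n_iter=200, seed=0):
    """EM for a two-class latent class model on binary indicators X
    (rows: companies, cols: pass/fail benchmark flags).
    Returns each row's posterior probability of belonging to class 1."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = 0.5                                     # class-1 mixing weight
    theta = rng.uniform(0.3, 0.7, size=(2, d))  # per-class success rates
    for _ in range(n_iter):
        # E-step: posterior responsibility of class 1 for each row
        log_p = np.stack([(X * np.log(theta[k]) +
                           (1 - X) * np.log(1 - theta[k])).sum(axis=1)
                          for k in range(2)])
        log_p[0] += np.log(1 - w)
        log_p[1] += np.log(w)
        r = 1.0 / (1.0 + np.exp(log_p[0] - log_p[1]))
        # M-step: update weight and success rates
        w = r.mean()
        theta[1] = (r[:, None] * X).sum(axis=0) / max(r.sum(), 1e-9)
        theta[0] = ((1 - r)[:, None] * X).sum(axis=0) / max((1 - r).sum(), 1e-9)
        theta = np.clip(theta, 1e-6, 1 - 1e-6)
    return r

# synthetic flags: 150 strong performers, then 150 weak ones (hypothetical)
rng = np.random.default_rng(1)
X = np.vstack([rng.random((150, 6)) < 0.85,
               rng.random((150, 6)) < 0.15]).astype(float)
labels = (latent_class_em(X) > 0.5).astype(int)
```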
9

A Study of Stock Price Behavior and the Distortion of Financial Indices under Price Limits / The Impacts of Stock Price Limits on Security Price Behavior and Financial Risk Indices Measures

黃健榮, Huang, Je Rome Unknown Date (has links)
Price limits on daily price movements have been in place in Taiwan's stock market for over thirty years. The authorities maintain this mechanism in order to dampen excessive price volatility and curb speculation. Price limits may nonetheless have side effects: beyond the intuitive concern that they distort investors' holding-risk indicators, this study identifies (1) their use as a technical indicator and (2) the distortion they induce in financial risk indices. Examining the relative merits of GMM, the Gibbs Sampler, and the Two-Limit Tobit Model, this study finds that the commonly used GMM estimator is not unbiased; although corrections can improve its efficiency, it still cannot measure all of the effects of price limits. The Gibbs Sampler relies heavily on a particular prior distribution and may be biased as a result, while the existing literature that uses the Tobit Model mostly ignores the influence of price limits on stock prices, so the resulting estimates are also biased. The sample period runs from January 3, 1990 to October 9, 1995, and the model used is the Two-Limit Tobit Model. The data were pre-processed before use, and CAAR was employed to verify the validity of the model. The empirical results show that price limits significantly change investor behavior: before a limit is hit, technical indicators and standard-deviation statistics are biased upward, which may mislead practitioners' financial decisions or academic conclusions. / This study explores how price limits, which have remained in the Taiwan Securities Exchange for over thirty years, affect both security price behavior and security risk indices. Its empirical results add to our understanding of the social costs and benefits of price limits. The SEC has been advocating the merits of price limits, emphasizing that they help eliminate speculative trades and reduce security price volatility. In contrast, it remains a popular thought that price limits increase investors' holding costs and risks. To empirically examine the effects of price limits in Taiwan, this paper adopts the Two-Limit Tobit Model, together with CAAR as an indicator of specification validity. The test results lend support to the notion of (1) a Technical Indicator Effect immediately before the price limits are hit and (2) an Enhancement Effect the day after. Moreover, price limits contribute to bias in both systematic risk and total risk estimates (namely, β and σ) and thus distort investment decisions. This study also contributes to the contemporary literature by examining the merits and limitations of GMM, the Gibbs Sampler, and the Two-Limit Tobit Model. The GMM estimator is subject to statistical bias; one may gain efficiency via adjustment, yet GMM has pitfalls in directly measuring the price-limit effects. The major limitation of the Gibbs Sampler is its reliance on specific prior information, which may lead to bias. Most papers adopting the Tobit Model simply feed the original data into the program, ignoring the fact that price limits may contaminate the following day's price data.
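The censored-likelihood idea behind the Two-Limit Tobit Model can be sketched directly: returns pinned at the daily limits contribute tail probabilities rather than density values. The coefficients, limit levels (here ±7%), and sample below are hypothetical:

```python
import numpy as np
from math import erf, sqrt, log, pi

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def two_limit_tobit_loglik(y, x, beta0, beta1, sigma, lo, hi):
    """Two-limit Tobit log-likelihood: the latent return beta0 + beta1*x + eps
    is observed only inside [lo, hi]; limit observations contribute tail
    probabilities, interior ones the normal density."""
    ll = 0.0
    for yi, xi in zip(y, x):
        mu = beta0 + beta1 * xi
        if yi <= lo:                       # pinned at the lower limit
            ll += log(max(norm_cdf((lo - mu) / sigma), 1e-300))
        elif yi >= hi:                     # pinned at the upper limit
            ll += log(max(1.0 - norm_cdf((hi - mu) / sigma), 1e-300))
        else:                              # interior observation
            z = (yi - mu) / sigma
            ll += -0.5 * z * z - log(sigma) - 0.5 * log(2.0 * pi)
    return ll

# hypothetical daily returns censored at +/-7% price limits
rng = np.random.default_rng(0)
x = rng.normal(size=1000)
y = np.clip(0.01 + 0.02 * x + 0.01 * rng.normal(size=1000), -0.07, 0.07)
ll_true = two_limit_tobit_loglik(y, x, 0.01, 0.02, 0.01, -0.07, 0.07)
ll_zero = two_limit_tobit_loglik(y, x, 0.01, 0.0, 0.01, -0.07, 0.07)
```

Feeding the clipped observations into an ordinary regression, by contrast, treats limit days as if they were interior observations, which is the source of the bias discussed above.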
10

Bayesian Regression Inference Using a Normal Mixture Model

Maldonado, Hernan 08 August 2012 (has links)
In this thesis we develop a two-component mixture model to perform a Bayesian regression. We implement our model computationally using the Gibbs sampler algorithm and apply it to a dataset of differences in time measurement between two clocks. The dataset has "good" time measurements and "bad" time measurements, which were associated with the two components of our mixture model. From our theoretical work we show that latent variables are a useful tool for implementing our Bayesian normal mixture model with two components. After applying our model to the data, we found that the model reasonably assigned probabilities of occurrence to the two states of the phenomenon under study; it also identified two processes with the same slope, different intercepts and different variances. / McAnulty College and Graduate School of Liberal Arts; / Computational Mathematics / MS; / Thesis;
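A two-component normal mixture regression with latent allocation variables, sharing a slope but allowing different intercepts and variances (the structure reported above), can be sketched with a Gibbs sampler under vague priors; the function name, initialization, and all settings are hypothetical:

```python
import numpy as np

def gibbs_mixture_regression(y, x, n_iter=1200, burn=400, seed=0):
    """Gibbs sampler for y_i = a_{z_i} + b * x_i + N(0, s2_{z_i}),
    with latent labels z_i in {0, 1}, shared slope b, and
    component-specific intercepts a and variances s2."""
    rng = np.random.default_rng(seed)
    n = y.size
    b = float(np.polyfit(x, y, 1)[0])        # rough initial slope
    r0 = y - b * x
    z = (r0 > np.median(r0)).astype(int)     # initial split by residual level
    a = np.array([r0[z == 0].mean(), r0[z == 1].mean()])
    s2 = np.array([r0[z == 0].var() + 0.01, r0[z == 1].var() + 0.01])
    w = 0.5
    keep_a, keep_b = [], []
    for t in range(n_iter):
        # latent labels | rest
        ll = np.empty((2, n))
        for k in (0, 1):
            res = y - a[k] - b * x
            ll[k] = -0.5 * np.log(s2[k]) - res ** 2 / (2 * s2[k])
        ll[0] += np.log(1 - w)
        ll[1] += np.log(w)
        z = (rng.random(n) < 1.0 / (1.0 + np.exp(ll[0] - ll[1]))).astype(int)
        # mixing weight | z (uniform prior)
        w = rng.beta(1 + z.sum(), 1 + n - z.sum())
        # intercepts | rest (flat prior, normal conditional)
        for k in (0, 1):
            m = z == k
            if m.any():
                a[k] = rng.normal((y[m] - b * x[m]).mean(),
                                  np.sqrt(s2[k] / m.sum()))
        # shared slope | rest
        prec = (x ** 2 / s2[z]).sum()
        b = rng.normal((x * (y - a[z]) / s2[z]).sum() / prec, np.sqrt(1.0 / prec))
        # variances | rest (scaled inverse-chi-square style draw)
        for k in (0, 1):
            m = z == k
            if m.any():
                sse = ((y[m] - a[k] - b * x[m]) ** 2).sum()
                s2[k] = (sse + 0.1) / rng.chisquare(m.sum() + 1)
        if t >= burn:
            keep_a.append(np.sort(a.copy()))  # sort to sidestep label switching
            keep_b.append(b)
    return np.array(keep_a), np.array(keep_b)

# "good"/"bad" style data: same slope, different intercepts and variances
rng = np.random.default_rng(3)
x = rng.uniform(0, 10, 300)
comp = rng.integers(0, 2, 300)
y = np.where(comp == 0, 0.0, 5.0) + x + rng.normal(0, np.where(comp == 0, 0.3, 0.6))
A, B = gibbs_mixture_regression(y, x)
```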
