51

BAYESIAN DECISION ANALYSIS OF A STATISTICAL RAINFALL/RUNOFF RELATION

Gray, Howard Axtell 10 1900 (has links)
The first purpose of this thesis is to provide a framework for the inclusion of data from a secondary source in Bayesian decision analysis as an aid in decision making under uncertainty. A second purpose is to show that the Bayesian procedures can be implemented on a computer to obtain accurate results at little expense in computing time. The state variables of a bridge design example problem are the unknown parameters of the probability distribution of the primary data. The primary source is the annual peak flow data for the stream being spanned. Information pertinent to the choice of bridge design is contained in rainfall data from gauges on the watershed but the distribution of this secondary data cannot be directly expressed in terms of the state variables. This study shows that a linear regression equation relating the primary and secondary data provides a means of using secondary data for finding the Bayes risk and expected opportunity loss associated with any particular bridge design and single new rainfall observation. The numerical results for the example problem indicate that the information gained from the rainfall data reduces the Bayes risk and expected opportunity loss and allows for a more economical structural design. Furthermore, the careful choice of the numerical methods employed reduces the computation time for these quantities to a level acceptable to any budget.
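The preposterior analysis this abstract describes can be sketched numerically. The following is a minimal illustration, not the thesis's actual model: the prior, losses, and regression coefficients are all invented, and the bridge design problem is reduced to a choice between two hypothetical designs. It shows the key mechanism — a secondary (rainfall) observation, linked to the state variable by a linear regression, reduces the Bayes risk, and the reduction is the expected value of sample information (EVSI).

```python
import math
import random

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Prior on theta, the mean (log) annual peak flow: theta ~ N(m0, s0^2).
m0, s0 = 10.0, 2.0
# Two hypothetical designs: d1 is cheap but incurs loss 100 if theta > t;
# d2 is a conservative design with a flat loss of 20.
t, fail_loss, safe_loss = 12.0, 100.0, 20.0

def expected_losses(mu, sd):
    """Expected loss of each design when theta ~ N(mu, sd^2)."""
    p_fail = 1.0 - phi((t - mu) / sd)
    return fail_loss * p_fail, safe_loss

# Prior Bayes risk: the best decision using prior information only.
prior_risk = min(expected_losses(m0, s0))

# Secondary (rainfall) data linked to theta by a regression r = a + b*theta + e,
# with e ~ N(0, sigma^2); normal conjugacy gives theta's posterior given one r.
a, b, sigma = 0.0, 1.0, 1.0
post_prec = 1.0 / s0**2 + b**2 / sigma**2
post_sd = math.sqrt(1.0 / post_prec)

# Preposterior risk: average over rainfall draws from the prior predictive of
# the risk of the best decision given r (Monte Carlo).
random.seed(0)
N = 20000
total = 0.0
for _ in range(N):
    theta = random.gauss(m0, s0)
    r = random.gauss(a + b * theta, sigma)
    post_mean = (m0 / s0**2 + b * (r - a) / sigma**2) / post_prec
    total += min(expected_losses(post_mean, post_sd))
preposterior_risk = total / N

evsi = prior_risk - preposterior_risk  # expected value of sample information
```

In this toy setup the rainfall observation cuts the expected opportunity loss roughly in half; the same preposterior logic extends to the continuous design choice the thesis treats.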
52

Motor Performance in the Context of Externally-imposed Payoffs

Neyedli, Heather Fern 20 March 2013 (has links)
Humans need to rapidly select movements that achieve their goals while avoiding negative outcomes, yet the processes leading to these decisions have only recently been studied. In the typical paradigm used to gain insight into the decision process, participants aim at a target circle that is overlapped by a penalty circle; they receive 100 points for hitting the target and lose points for hitting the penalty region. Previous research has shown that participants generally behave like rational decision makers by adapting their endpoints when the distance between the target and penalty circles or the penalty value changes (although some suboptimal selection has been noted). The overall purpose of the research reported in this thesis was to determine whether there are contexts in which participants' behaviour is suboptimal in rapid motor decision-making tasks. Taken together, the results from four studies showed that: 1) participants require experience and feedback to aim at optimal locations; 2) participants often aimed closer to the target center than optimal; and 3) probability (represented through spatial parameters) has more influence over participants' motor decisions than does the value of the penalty. Therefore, participants' actions do not necessarily conform to a rational model of decision making; rather, consistent biases arise in the selection, planning, and execution of actions in specific contexts. These findings can lead to a more descriptive understanding of motor decision making, providing information complementary to prescriptive models of rational behaviour.
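The rational benchmark in this paradigm is an expected-gain calculation, in the spirit of the decision models this literature builds on. The sketch below uses invented parameters (circle radius, noise level, penalty values): it evaluates candidate aim points under Gaussian endpoint noise and shows that the optimal aim point shifts away from the penalty circle as the penalty grows, which is the standard against which the thesis measures participants' biases.

```python
import numpy as np

rng = np.random.default_rng(1)

R = 1.0          # target and penalty circle radius (arbitrary units)
sep = 1.0 * R    # penalty circle centered sep to the left of the target
sigma = 0.5 * R  # std dev of motor (endpoint) noise
gain = 100.0

# Common random endpoint noise, reused across candidate aim points so the
# comparison of expected gains is smooth.
noise = rng.normal(0.0, sigma, size=(40000, 2))

def expected_gain(offset, penalty):
    """Monte Carlo expected gain when aiming at (offset, 0)."""
    end = noise + np.array([offset, 0.0])
    hit_target = np.hypot(end[:, 0], end[:, 1]) <= R        # target at origin
    hit_penalty = np.hypot(end[:, 0] + sep, end[:, 1]) <= R  # penalty at (-sep, 0)
    return np.mean(gain * hit_target - penalty * hit_penalty)

offsets = np.linspace(0.0, 1.4 * R, 57)
best = {p: offsets[np.argmax([expected_gain(o, p) for o in offsets])]
        for p in (0.0, 500.0)}
```

With no penalty the best aim point is the target center; with a steep penalty the optimal endpoint moves well away from the penalty region, matching the rational adaptation described in the abstract.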
54

Bayesian approach to variable sampling plans for the Weibull distribution with censoring.

January 1996 (has links)
by Jian-Wei Chen. Thesis (M.Phil.)--Chinese University of Hong Kong, 1996. Includes bibliographical references (leaves 84-86).
Contents:
  Chapter 1. Introduction: 1.1 Introduction (p.1); 1.2 Bayesian approach to single variable sampling plan for the exponential distribution (p.3); 1.3 Outline of the thesis (p.7)
  Chapter 2. Single Variable Sampling Plan with Type II Censoring: 2.1 Model (p.10); 2.2 Loss function and finite algorithm (p.13); 2.3 Numerical examples and sensitivity analysis (p.17)
  Chapter 3. Double Variable Sampling Plan with Type II Censoring: 3.1 Model (p.25); 3.2 Loss function and Bayes risk (p.27); 3.3 Discretization method and numerical analysis (p.33)
  Chapter 4. Bayesian Approach to Single Variable Sampling Plans for General Life Distribution with Type I Censoring: 4.1 Model (p.42); 4.2 The case of the Weibull distribution (p.47); 4.3 The case of the two-parameter exponential distribution (p.49); 4.4 The case of the gamma distribution (p.52); 4.5 Numerical examples and sensitivity analysis (p.54)
  Chapter 5. Discussions: 5.1 Comparison between Bayesian variable sampling plans and OC curve sampling plans (p.63); 5.2 Comparison between single and double sampling plans (p.64); 5.3 Comparison of both models (p.66); 5.4 Choice of parameters and coefficients (p.66)
  Appendix (p.78); References (p.84)
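The Type II censoring setup in Chapter 2 can be illustrated with the exponential special case this record mentions (the baseline the thesis extends to the Weibull). The numbers below — sample size, censoring point, gamma prior, and lifetime specification — are invented for illustration: testing stops at the r-th failure, the total time on test enters a conjugate gamma update of the failure rate, and the lot is accepted if the posterior mean lifetime meets the specification.

```python
import random

random.seed(7)

# n items on test, stop at the r-th failure (Type II censoring);
# lifetimes ~ Exponential(rate lam_true).  All values are hypothetical.
n, r, lam_true = 20, 8, 0.5
times = sorted(random.expovariate(lam_true) for _ in range(n))
failures = times[:r]

# Total time on test: observed failure times plus (n - r) survivors
# censored at the r-th failure time.
T = sum(failures) + (n - r) * failures[-1]

# Conjugate Gamma(a, b) prior on the rate -> Gamma(a + r, b + T) posterior.
a, b = 2.0, 4.0
post_mean_rate = (a + r) / (b + T)

prior_mean_rate = a / b   # prior guess of the failure rate
mle_rate = r / T          # maximum likelihood estimate from the censored data

# A simple acceptance rule: accept the lot if the posterior mean
# lifetime meets a (hypothetical) specification.
spec_lifetime = 1.5
accept = (1.0 / post_mean_rate) >= spec_lifetime
```

The posterior mean rate is a precision-weighted compromise between the prior mean and the MLE; the thesis's sampling plans choose (n, r) and the acceptance rule to minimize a Bayes risk built from such posteriors.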
55

Some aspects on Bayesian analysis of the LISREL model.

January 2002 (has links)
Tse Ka Ling Carol. Thesis (M.Phil.)--Chinese University of Hong Kong, 2002. Includes bibliographical references (leaves 72-76). Abstracts in English and Chinese.
Contents:
  Chapter 1. Introduction (p.1): 1.1 The Factor Analysis Model (p.1); 1.2 Main Objectives (p.2): 1.2.1 Investigate the distribution of the estimated factor scores (p.2); 1.2.2 Propose an alternative method for getting the estimates of the LISREL model (p.4); 1.3 Summary (p.4)
  Chapter 2. Joint Bayesian Approach to the Factor Analysis Model (p.6): 2.1 Conditional Distributions (p.7): 2.1.1 Conditional distribution of Z given Y and theta (p.7); 2.1.2 Conditional distribution of theta given Y and Z (p.7); 2.2 Implementation of the Gibbs sampler for generating the random observations (p.11); 2.3 Bayesian Estimates and their Statistical Properties (p.13): 2.3.1 Estimates of unknown parameters (p.13); 2.3.2 Estimates of Factor Scores (p.14)
  Chapter 3. Examining the distribution of the estimated factor scores (p.15): 3.1 The 1st Simulation Study (p.15); 3.2 The 2nd Simulation Study (p.30); 3.3 The 3rd Simulation Study (p.31)
  Chapter 4. An Alternative Method for Getting the Parameter Estimates in the LISREL Model (p.44): 4.1 Full LISREL model (p.44); 4.2 Our proposed method (p.46); 4.3 Simulation Studies (p.49): 4.3.1 The 1st Simulation Study (p.49); 4.3.2 The 3rd Simulation Study (p.50); 4.4 Conclusion (p.53)
  Appendix (p.56); Bibliography (p.72)
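The Gibbs sampler of Section 2.2 — alternating draws of factor scores Z given theta and parameters theta given Z — can be sketched for the simplest case: a one-factor model with three indicators. This is an illustrative implementation with invented true values, not the thesis's code; the full conditionals follow from standard normal/inverse-gamma conjugacy, with diffuse N(0, 100) priors on loadings and IG(2, 1) priors on uniquenesses assumed for the sketch.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate a one-factor model with p = 3 indicators:
#   y_ij = lam_j * z_i + eps_ij,  z_i ~ N(0, 1),  eps_ij ~ N(0, psi_j)
n, p = 400, 3
lam_true = np.array([0.8, 0.6, 0.7])
psi_true = np.array([0.3, 0.4, 0.35])
z_true = rng.normal(size=n)
Y = z_true[:, None] * lam_true + rng.normal(size=(n, p)) * np.sqrt(psi_true)

lam = np.ones(p)
psi = np.ones(p)
draws = []
for it in range(2000):
    # 1. Factor scores z_i | Y, lam, psi  (normal full conditional).
    prec = 1.0 + np.sum(lam**2 / psi)
    mean = (Y @ (lam / psi)) / prec
    z = mean + rng.normal(size=n) / np.sqrt(prec)
    # 2. Loadings lam_j | Y, z, psi  with a diffuse N(0, 100) prior.
    v = 1.0 / (z @ z / psi + 1.0 / 100.0)
    m = v * (Y.T @ z) / psi
    lam = m + rng.normal(size=p) * np.sqrt(v)
    # 3. Uniquenesses psi_j | Y, z, lam  with an IG(2, 1) prior.
    resid = Y - z[:, None] * lam
    shape = 2.0 + n / 2.0
    scale = 1.0 + 0.5 * np.sum(resid**2, axis=0)
    psi = scale / rng.gamma(shape, 1.0, size=p)
    # Resolve the sign indeterminacy (z, lam) -> (-z, -lam).
    if lam[0] < 0:
        lam, z = -lam, -z
    if it >= 500:
        draws.append(lam.copy())

lam_hat = np.mean(draws, axis=0)  # posterior mean loadings after burn-in
```

Averaging the post-burn-in draws recovers the loadings; the same alternating scheme, with more elaborate conditionals, underlies the joint Bayesian approach the thesis applies to the full LISREL model.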
56

Bayesian analysis in censored rank-ordered probit model with applications. / CUHK electronic theses & dissertations collection

January 2013 (has links)
Vast amounts of preference data arise in daily life and scientific research, where observations consist of preferences over a set of available objects. The observations are usually recorded as ranking data or multinomial data. Sometimes there is no clear preference between two objects, which results in ranking data with ties, also called censored rank-ordered data. To study this kind of data, we develop a symmetric Bayesian probit model based on Thurstone's random utility (discriminal process) assumption. However, parameter identification is an unavoidable problem for the probit model, i.e., determining the location and scale of the latent utilities. The standard identification method specifies one of the objects as a base and models the differences of the other utilities from the base utility. However, Bayesian predictions have been shown to be sensitive to the choice of base in the case of multinomial data. In this thesis, we instead take as the base the average utility of the whole set of objects, which is symmetric under any relabeling of the objects. Based on this new base, we propose a symmetric identification approach that fully identifies the multinomial probit model, and we design a Bayesian algorithm to fit it. Through a simulation study and real data analysis, we find that this probit model is not only well identified but also free of the sensitivity mentioned above. We then generalize the model to fit general censored rank-ordered data, obtaining the symmetric Bayesian censored rank-ordered probit (CROP) model. Finally, we apply this model to analyze Hong Kong horse racing data.
Pan, Maolin. Thesis (Ph.D.)--Chinese University of Hong Kong, 2013. Includes bibliographical references (leaves 50-55). Electronic reproduction. Hong Kong: Chinese University of Hong Kong, [2012]. Abstract also in Chinese.
Contents:
  Chapter 1. Introduction (p.1): 1.1 Overview (p.2): 1.1.1 The Ranking Model (p.2); 1.1.2 Discrete Choice Model (p.4); 1.2 Methodology (p.7): 1.2.1 Data Augmentation (p.8); 1.2.2 Marginal Data Augmentation (p.8); 1.3 An Outline (p.9)
  Chapter 2. Bayesian Multinomial Probit Model Based on Symmetric Identification (p.11): 2.1 Introduction (p.11); 2.2 The MNP Model (p.14); 2.3 Symmetric Identification and Bayesian Analysis (p.17): 2.3.1 Symmetric Identification (p.18); 2.3.2 Bayesian Analysis (p.21); 2.4 Case Studies (p.25): 2.4.1 Simulation Study (p.25); 2.4.2 Clothes Detergent Purchases Data (p.27); 2.5 Summary (p.29)
  Chapter 3. Symmetric Bayesian Censored Rank-Ordered Probit Model (p.30): 3.1 Introduction (p.30); 3.2 Ranking Model (p.33): 3.2.1 Ranking Data (p.33); 3.2.2 Censored Rank-Ordered Probit Model (p.35); 3.2.3 Symmetrically Identified CROP Model (p.36); 3.3 Bayesian Analysis of the Symmetrically Identified CROP Model (p.37): 3.3.1 Model Estimation (p.38); 3.4 Application: Hong Kong Horse Racing (p.41); 3.5 Summary (p.44)
  Chapter 4. Conclusion and Further Studies (p.45)
  Appendix A. Prior for covariance matrix with trace augmented restriction (p.47); Appendix B. Derivation of sampling intervals (p.49); Bibliography (p.50)
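The symmetric identification idea in the abstract — differencing against the average utility rather than one designated object's utility — can be illustrated directly. The latent utilities below are simulated, hypothetical values; the sketch only demonstrates the two transformations and their shared invariant (the observed choice), not the thesis's full estimation algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Latent utilities for n choice situations over J objects (hypothetical
# values: standard normal noise around invented object-specific means).
n, J = 1000, 4
U = rng.normal(size=(n, J)) + np.array([0.2, -0.1, 0.4, 0.0])

# Standard identification: difference against one object as the base
# (object 0 here) -- the result depends on which object is chosen.
W_std = U[:, 1:] - U[:, [0]]

# Symmetric identification (as described in the abstract): subtract the
# average utility, which treats every object the same under relabeling.
W_sym = U - U.mean(axis=1, keepdims=True)

# The observed choice (argmax utility) is unchanged by subtracting any
# row-wise constant, so both transformations preserve it.
choice = U.argmax(axis=1)
```

Each row of the symmetric utilities sums to zero (the location is fixed without privileging any object), which is the property that removes the base-choice sensitivity the abstract describes.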
57

Clearinghouse Default Resources: Theory and Empirical Analysis

Cheng, Wan-Schwin Allen January 2017 (has links)
Clearinghouses insure trades. Acting as central counterparties (CCPs), they consolidate financial exposures across multiple institutions, aiding the efficient management of counterparty credit risk. In this thesis, we study the decision problem faced by for-profit clearinghouses, focusing on the primary economic incentives driving their determination of layers of loss-absorbing capital. The clearinghouse's loss-allocation mechanism, referred to as the default waterfall, governs the allocation and management of counterparty risk; its stock of loss-absorbing capital typically consists of initial margins, default funds, and the clearinghouse's contributed equity. We separate the overall decision problem into two distinct subproblems and study them individually: the first is the clearinghouse's choice of initial margin and clearing fee requirements, and the second involves its choice of resources further down the waterfall, namely the default funds and clearinghouse equity. We solve explicitly for the clearinghouse's equilibrium choices in both cases and address the different economic roles they play in the clearinghouse's profit maximization. The models presented in this thesis show, without exception, that clearinghouse choices should depend not only on the riskiness of the cleared position but also on market and participant characteristics such as default probabilities, fundamental value, and funding opportunity cost. Our results have important policy implications. For instance, we predict a counteracting force that dampens monetary easing enacted via low interest rate policies: when funding opportunity costs are low, our analysis shows that clearinghouses employ highly conservative margins and default funds, which tie up capital and credit. This prediction is consistent with the low interest rate environment following the financial crisis of 2007-08, in which, in addition to low productivity growth and return on capital, major banks chose to accumulate large cash piles on their balance sheets rather than increase lending. In terms of systemic risk, our empirical work, joint with the U.S. Commodity Futures Trading Commission (CFTC), points to the possibility of destabilizing loss and margin spirals: in the terminology of Brunnermeier and Pedersen (2009), we argue that a major clearinghouse's behavior is consistent with that of an uninformed financier and that common shocks to credit quality can lead to tightening margin constraints.
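The margin-spiral channel mentioned above can be illustrated with a stylized quantile-based margin rule (an assumption for illustration only — real CCP margin models are far more elaborate): when volatility rises, a margin set at a fixed tail quantile of the position's loss distribution tightens mechanically, which is the kind of feedback that can force deleveraging after a common shock.

```python
import numpy as np

rng = np.random.default_rng(3)

def initial_margin(sigma, alpha=0.99, n=200000):
    """Stylized VaR-style margin: the alpha-quantile of one-period loss
    for a position with Gaussian P&L of volatility sigma (hypothetical
    model, invented parameters)."""
    pnl = rng.normal(0.0, sigma, size=n)
    return np.quantile(-pnl, alpha)

calm = initial_margin(1.0)      # margin in a calm regime
stressed = initial_margin(3.0)  # margin after a volatility shock
```

Tripling volatility roughly triples the required margin under this rule; in a stress episode that extra margin must be funded exactly when funding is scarcest, which is the spiral mechanism the empirical work points to.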
58

Extremal martingales with applications and a Bayesian approach to model selection

Dümbgen, Moritz January 2015 (has links)
No description available.
59

Bayesian criterion-based model selection in structural equation models. / CUHK electronic theses & dissertations collection

January 2010 (has links)
Structural equation models (SEMs) are widely used in the behavioral, educational, medical, and social sciences, and software packages such as EQS, LISREL, Mplus, and WinBUGS support their analysis. Among the many methods developed for analyzing SEMs, the Bayesian approach is popular, and an important issue in the Bayesian analysis of SEMs is model selection. In the literature, the Bayes factor and the deviance information criterion (DIC) are the statistics most commonly used for Bayesian model selection. However, as noted in Chen et al. (2004), the Bayes factor relies on posterior model probabilities, which require proper prior distributions, and specifying prior distributions for all models under consideration is usually challenging, particularly when the model space is large. It is also well known that the Bayes factor and posterior model probabilities are generally sensitive to the choice of prior distributions for the parameters, and the computational burden of the Bayes factor is heavy. Criterion-based methods are attractive alternatives in that they generally do not require proper prior distributions and are simple to compute. One commonly used criterion-based method is DIC, which, however, assumes the posterior mean to be a good estimator; moreover, for some models, such as mixture SEMs, WinBUGS does not provide DIC values, and when the difference in DIC values is small, reporting only the model with the smallest DIC may be misleading. In this thesis, motivated by these limitations of the Bayes factor and DIC, a Bayesian model selection criterion called the Lv measure is considered. It combines the posterior predictive variance and bias, and can be viewed as a Bayesian goodness-of-fit statistic.
The calibration distribution of the Lv measure, defined as the prior predictive distribution of the difference between the Lv measures of the candidate model and the criterion-minimizing model, is discussed to aid detailed understanding of the measure. The computation of the Lv measure is simple and its performance satisfactory, making it an attractive model selection statistic. This thesis studies the application of the Lv measure to various kinds of SEMs, with illustrative examples evaluating its performance for model selection. To compare different model selection methods, the Bayes factor and DIC are also computed, and different prior inputs and sample sizes are considered to check their impact on the performance of the Lv measure. When the performances of two models are similar, the simpler one is selected.
Li, Yunxian. Adviser: Song Xinyuan. Source: Dissertation Abstracts International, Volume 72-04, Section B. Thesis (Ph.D.)--Chinese University of Hong Kong, 2010. Includes bibliographical references (leaves 116-122). Abstract also in Chinese.
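The "posterior predictive variance plus bias" structure of such criterion-based measures admits a compact numerical sketch. The form below follows the common L measure of the Ibrahim-Laud / Gelfand-Ghosh family with a weight nu — an assumption standing in for the thesis's exact Lv definition — and the conjugate normal example, the weight nu = 0.5, and all data values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated data from y_i ~ N(theta, 1) with theta = 3 (hypothetical example).
n, theta_true = 50, 3.0
y = rng.normal(theta_true, 1.0, size=n)

def l_measure(pred_mean, pred_var, y, nu=0.5):
    """Criterion-based L measure: sum of posterior predictive variances
    plus nu times the summed squared bias of the predictive means."""
    return np.sum(pred_var) + nu * np.sum((pred_mean - y) ** 2)

# Model A: y_i ~ N(theta, 1) with conjugate prior theta ~ N(0, 100).
v = 1.0 / (n + 1.0 / 100.0)   # posterior variance of theta
m = v * np.sum(y)             # posterior mean of theta
L_A = l_measure(np.full(n, m), np.full(n, 1.0 + v), y)

# Model B: theta fixed at 0 -- a deliberately misspecified candidate.
L_B = l_measure(np.zeros(n), np.ones(n), y)
```

The misspecified model pays a large bias penalty and receives the larger L value, so the criterion selects the correct model; the calibration distribution discussed in the abstract quantifies how large such differences must be to matter.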
60

Bayesian statistical analysis for nonrecursive nonlinear structural equation models. / CUHK electronic theses & dissertations collection

January 2007 (has links)
Keywords: Bayesian analysis, Finite mixture, Gibbs sampler, Langevin-Hastings sampler, MH sampler, Model comparison, Nonrecursive nonlinear structural equation model, Path sampling.
Structural equation models (SEMs) have been applied extensively in management, marketing, the behavioral and social sciences, and other fields for studying relationships among manifest and latent variables. Motivated by the more complex data structures appearing in various fields, more complicated models have recently been developed. These developments rest on a standard assumption about the regression coefficients among the underlying latent variables themselves: specifically, the structural equation model is generally assumed to be recursive. In practice, however, nonrecursive SEMs are not uncommon, so this fundamental assumption is not always appropriate.
The main objective of this thesis is to relax this assumption by developing efficient procedures for some complex nonrecursive nonlinear SEMs (NNSEMs). The work is based on Bayesian statistical analysis of NNSEMs. The first chapter introduces background knowledge about NNSEMs. In Chapter 2, Bayesian estimates of NNSEMs are given, and topics such as standard errors and model comparison are discussed. In Chapter 3, we develop an efficient hybrid MCMC algorithm to obtain Bayesian estimates for NNSEMs with mixed continuous and ordered categorical data, again with discussion of related statistical topics. In Chapter 4, finite mixture NNSEMs are analyzed with the Bayesian approach. The newly developed methodologies are all illustrated with simulation studies and real examples, and conclusions and discussion are given in Chapter 5.
Li, Yong. "July 2007." Adviser: Sik-yum Lee. Source: Dissertation Abstracts International, Volume 69-01, Section B, page 0398. Thesis (Ph.D.)--Chinese University of Hong Kong, 2007. Includes bibliographical references (p. 99-111). Abstracts in English and Chinese. School code: 1307.
