151

Some developments of local quasi-likelihood estimation and optimal Bayesian sampling plans for censored data. / CUHK electronic theses & dissertations collection / Digital dissertation consortium

January 1999 (has links)
by Jian Wei Chen. / "May 1999." / Thesis (Ph.D.)--Chinese University of Hong Kong, 1999. / Includes bibliographical references (p. 178-180). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. Ann Arbor, MI : ProQuest Information and Learning Company, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Mode of access: World Wide Web.
152

Topics in Computational Bayesian Statistics With Applications to Hierarchical Models in Astronomy and Sociology

Sahai, Swupnil January 2018 (has links)
This thesis includes three parts. The overarching theme is how to analyze structured hierarchical data, with applications to astronomy and sociology. The first part discusses how expectation propagation can be used to parallelize the computation when fitting large hierarchical Bayesian models. This methodology is then used to fit a novel, nonlinear mixture model to ultraviolet radiation from various regions of the observable universe. The second part discusses how the Stan probabilistic programming language can be used to numerically integrate terms in a hierarchical Bayesian model. This technique is demonstrated on supernova data, significantly speeding up convergence to the posterior distribution compared with a previous study that used a Gibbs-type sampler. The third part builds a formal latent kernel representation for aggregate relational data as a way to more robustly estimate the mixing characteristics of agents in a network. In particular, the framework is applied to sociology surveys to estimate, as a function of ego age, the age and sex composition of the personal networks of individuals in the United States.
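As a rough illustration of the parallelization idea in the first part, the sketch below (not taken from the thesis) splits data across shards, lets each shard form a Gaussian "site" approximation to its likelihood in natural-parameter form, and combines the sites with the prior by summing natural parameters. The conjugate normal model, the function names, and the numbers are all assumptions made for the example.

```python
# A minimal sketch of the idea behind parallel expectation propagation:
# split the data into shards, approximate each shard's likelihood by a
# Gaussian "site" in natural-parameter form, and combine sites with the
# prior by summing natural parameters. Model assumed for illustration:
# y_i ~ Normal(theta, sigma^2) with a Normal(mu0, tau0^2) prior on theta,
# so the shard approximations are exact.
import numpy as np

def site_from_shard(y_shard, sigma=1.0):
    """One shard's likelihood as natural parameters (precision, precision * mean)."""
    n = len(y_shard)
    prec = n / sigma**2
    return prec, prec * y_shard.mean()

def combine(prior_mu, prior_tau, sites):
    """Prior natural parameters plus the sum of the site contributions."""
    prec = 1.0 / prior_tau**2 + sum(p for p, _ in sites)
    lin = prior_mu / prior_tau**2 + sum(l for _, l in sites)
    return lin / prec, np.sqrt(1.0 / prec)   # posterior mean, sd

rng = np.random.default_rng(0)
data = rng.normal(2.0, 1.0, size=10_000)
shards = np.array_split(data, 8)              # each shard could live on a worker
sites = [site_from_shard(s) for s in shards]  # embarrassingly parallel step
print(combine(prior_mu=0.0, prior_tau=10.0, sites=sites))
```

With Gaussian sites this one-pass combination is exact; full expectation propagation iterates comparable site updates against cavity distributions.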
153

Selected Legal Applications for Bayesian Methods

Cheng, Edward K. January 2018 (has links)
This dissertation offers three contexts in which Bayesian methods can address tricky problems in the legal system. Chapter 1 offers a method for attacking case publication bias, the possibility that certain legal outcomes may be more likely to be published or observed than others. It builds on ideas from multiple systems estimation (MSE), a technique traditionally used for estimating hidden populations, to detect and correct case publication bias. Chapter 2 proposes new methods for dividing attorneys' fees in complex litigation involving multiple firms. It investigates optimization and statistical approaches that use peer reports of each firm's relative contribution to estimate a "fair" or consensus division of the fees. The methods proposed have lower informational requirements than previous work and appear to be robust to collusive behavior by the firms. Chapter 3 introduces a statistical method for classifying legal cases by doctrinal area or subject matter. It proposes using a latent space approach based on case citations as an alternative to the traditional manual coding of cases, reducing subjectivity, arbitrariness, and confirmation bias in the classification process.
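For a sense of what multiple systems estimation does in its simplest two-list form, here is a hedged capture-recapture sketch; the estimator, the made-up counts, and the function name are illustrative and not drawn from the dissertation.

```python
# A minimal two-list capture-recapture sketch of the reasoning behind
# multiple systems estimation: use the overlap between independent lists
# of observed cases to estimate how many cases exist in total.
def chapman_estimate(n1, n2, overlap):
    """Chapman's bias-corrected Lincoln-Petersen estimator of total population size."""
    return (n1 + 1) * (n2 + 1) / (overlap + 1) - 1

# e.g. 120 cases found in one reporter, 90 in another, 45 appearing in both
total_cases = chapman_estimate(n1=120, n2=90, overlap=45)
print(round(total_cases))   # estimated number of cases, observed or not
```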
154

A Bayesian Approach to the Understanding of Exoplanet Populations and the Origin of Life

Chen, Jingjing January 2018 (has links)
The study of extrasolar planets, or exoplanets for short, has developed rapidly over the last decade. While we have spent much effort building both ground-based and space telescopes to search for exoplanets, it is even more important that we use the observational data wisely to understand them. Exoplanets are of great interest to both astronomers and the general public because they exhibit a variety of characteristics that we could not have anticipated from the planets within our Solar System. Analyzing exoplanet populations properly requires the tools of statistics. Therefore, in Chapter 1, I describe the science background as well as the statistical methods applied in this thesis. In Chapter 2, I discuss in detail how to train a hierarchical Bayesian model that fits the relationship between the masses and radii of exoplanets and categorizes exoplanets on that basis. A natural application of the model is to take a future observation of either mass or radius and predict the other measurement; I show two such application cases in Chapter 3. An exoplanet's composition is also strongly constrained by its mass and radius, so in Chapter 4 I show a simple way to constrain the composition of exoplanets and discuss how more sophisticated methods can be applied in future work. Of even greater interest is whether there is life elsewhere in the Universe. Although a future discovery of extraterrestrial life may well come by chance, a clearly sketched plan gives the search some direction, and research in this area is still very preliminary. Fortunately, besides directly searching for extraterrestrial life, we can also apply statistical reasoning to estimate the rate of abiogenesis, which gives a probabilistic handle on the question of whether extraterrestrial life exists. In Chapter 5, I discuss how different methods can constrain the abiogenesis rate from an informatics perspective. Finally, I give a brief summary in Chapter 6.
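A toy, non-hierarchical sketch of the mass-radius idea follows: fit a single power law in log space and use it to predict a radius from a newly measured mass. The synthetic data, the single power law, and the point-estimate fit are simplifications of the thesis's hierarchical Bayesian mixture, made only for illustration.

```python
# Fit R = C * M^b in log space on synthetic data and predict a radius
# from a new mass measurement (a stand-in for the thesis's multi-population
# hierarchical model).
import numpy as np

rng = np.random.default_rng(1)
mass = 10 ** rng.uniform(-1, 2, size=200)                       # Earth masses (synthetic)
radius = 1.0 * mass ** 0.55 * np.exp(rng.normal(0, 0.1, 200))   # synthetic radii

# Least-squares fit of log10(R) = a + b * log10(M)
b, a = np.polyfit(np.log10(mass), np.log10(radius), deg=1)

def predict_radius(new_mass):
    """Point prediction of radius (Earth radii) from mass (Earth masses)."""
    return 10 ** (a + b * np.log10(new_mass))

print(predict_radius(5.0))   # predicted radius for a 5 Earth-mass planet
```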
155

Bayesian predictive models of user intention

Mestre, María del Rosario January 2015 (has links)
No description available.
156

Bayesian time series learning with Gaussian processes

Frigola-Alcalde, Roger January 2016 (has links)
No description available.
157

Bayesian variable selection for high dimensional data analysis. / CUHK electronic theses & dissertations collection

January 2010 (has links)
In the practice of statistical modeling, it is often desirable to have an accurate predictive model. Modern data sets usually have a large number of predictors; DNA microarray gene expression data, for example, are typically characterized by few observations and a large number of variables, so parsimony is an especially important issue. Best-subset selection is a conventional method of variable selection, but with many variables, a relatively small sample size, and severe collinearity among the variables, standard statistical methods for selecting relevant variables often face difficulties. / The second part of the thesis proposes a Bayesian stochastic variable selection approach for gene selection based on a probit regression model with a generalized singular g-prior distribution for the regression coefficients. Using simulation-based MCMC methods to simulate parameters from the posterior distribution, an efficient and dependable algorithm is implemented. It is also shown that this algorithm is robust to the choice of initial values and produces posterior probabilities of related genes for biological interpretation. The performance of the proposed approach is compared with other popular gene selection and classification methods on the well-known colon cancer and leukemia data sets from the microarray literature. / In the third part of the thesis, we propose a Bayesian stochastic search variable selection approach for multi-class classification, which can identify relevant genes by assessing sets of genes jointly. We consider a multinomial probit model with a generalized g-prior for the regression coefficients. An efficient algorithm using simulation-based MCMC methods is developed for simulating parameters from the posterior distribution. This algorithm is robust to the choice of initial value and produces posterior probabilities of relevant genes for biological interpretation. We demonstrate the performance of the approach on two well-known gene expression profiling data sets: leukemia data and lymphoma data. Compared with other classification approaches, our approach selects smaller numbers of relevant genes and obtains competitive classification accuracy. / The last part of the thesis concerns further research, presenting a stochastic variable selection approach with different two-level hierarchical prior distributions. These priors can be used as a sparsity-enforcing mechanism to perform gene selection for classification. Using simulation-based MCMC methods to simulate parameters from the posterior distribution, an efficient algorithm can be developed and implemented. / Yang, Aijun. / Adviser: Xin-Yuan Song. / Source: Dissertation Abstracts International, Volume: 72-04, Section: B. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2010. / Includes bibliographical references (leaves 89-98). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. Ann Arbor, MI : ProQuest Information and Learning Company, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstract also in Chinese.
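As a rough companion to this abstract, the sketch below enumerates models under a Zellner g-prior for ordinary linear regression and reports posterior inclusion probabilities. The linear (rather than probit) likelihood, the exhaustive enumeration (rather than stochastic search MCMC), and all variable names are simplifying assumptions, not the thesis's method.

```python
# Minimal Bayesian variable selection sketch: enumerate all subsets of a few
# predictors, score each with the closed-form g-prior Bayes factor against the
# intercept-only model (Liang et al. 2008), and sum posterior model
# probabilities into per-variable inclusion probabilities.
import itertools
import numpy as np

def r_squared(y, X):
    """Ordinary R^2 of a least-squares fit with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

def log_bayes_factor(y, X, gamma, g):
    """log BF of the model with predictors `gamma` versus the null model."""
    if not gamma:
        return 0.0
    n, p = len(y), len(gamma)
    R2 = r_squared(y, X[:, gamma])
    return 0.5 * (n - p - 1) * np.log1p(g) - 0.5 * (n - 1) * np.log1p(g * (1 - R2))

rng = np.random.default_rng(2)
n, d = 100, 6
X = rng.normal(size=(n, d))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(size=n)   # only x0 and x3 matter

g = float(n)                                             # unit-information choice
models = [list(c) for k in range(d + 1) for c in itertools.combinations(range(d), k)]
logp = np.array([log_bayes_factor(y, X, m, g) for m in models])
post = np.exp(logp - logp.max())
post /= post.sum()                                       # uniform prior over models

inclusion = [sum(p for m, p in zip(models, post) if j in m) for j in range(d)]
print(np.round(inclusion, 3))    # posterior inclusion probability per predictor
```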
158

Investigation on Bayesian Ying-Yang learning for model selection in unsupervised learning. / CUHK electronic theses & dissertations collection / Digital dissertation consortium

January 2005 (has links)
Model selection is a critical issue in unsupervised learning. Conventionally, model selection is implemented in two phases using a statistical model selection criterion such as Akaike's information criterion (AIC), Bozdogan's consistent Akaike's information criterion (CAIC), Schwarz's Bayesian inference criterion (BIC), which formally coincides with the minimum description length (MDL) criterion, or the cross-validation (CV) criterion. These methods are very time-intensive and may become problematic when the sample size is small. Recently, Bayesian Ying-Yang (BYY) harmony learning has been developed as a unified framework with new mechanisms for model selection and regularization. In this thesis we make a systematic investigation of BYY learning, as well as several typical model selection criteria, for model selection on factor analysis models, Gaussian mixture models, and factor analysis mixture models. / For factor analysis models, we develop an improved BYY harmony data smoothing learning criterion, BYY-HDS, by taking into account the dependence between the factors and the observations. We make empirical comparisons of the BYY harmony empirical learning criterion BYY-HEC, BYY-HDS, the BYY automatic model selection method BYY-AUTO, AIC, CAIC, BIC, and CV for selecting the number of factors, not only on simulated data sets with different sample sizes, noise variances, data dimensions, and numbers of factors, but also on two real data sets concerning air pollution and sports track records. / The most remarkable finding of our study is that BYY-HDS is superior to its counterparts, especially when the sample size is small. AIC, BYY-HEC, BYY-AUTO, and CV risk overestimating, while BIC and CAIC risk underestimating in most cases. BYY-AUTO is superior to the other methods from a computational-cost point of view, whereas the cross-validation method requires the highest computing cost. (Abstract shortened by UMI.) / Hu Xuelei. / "November 2005." / Adviser: Lei Xu. / Source: Dissertation Abstracts International, Volume: 67-07, Section: B, page: 3899. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2005. / Includes bibliographical references (p. 131-142). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. Ann Arbor, MI : ProQuest Information and Learning Company, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstract in English and Chinese. / School code: 1307.
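For context on the conventional two-phase approach the thesis compares against, here is a minimal sketch of BIC-based selection of the number of Gaussian mixture components using scikit-learn; BYY-HDS and BYY-AUTO themselves are not implemented here, and the data are synthetic.

```python
# Two-phase model selection baseline: fit Gaussian mixtures with different
# numbers of components and pick the one that minimizes BIC.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(loc, 0.5, size=(200, 2))        # three true clusters
               for loc in ([0, 0], [4, 0], [2, 3])])

bic = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
       for k in range(1, 8)}
print(min(bic, key=bic.get))   # selected number of components (expect 3)
```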
159

Some Bayesian methods for analyzing mixtures of normal distributions. / CUHK electronic theses & dissertations collection / Digital dissertation consortium

January 2003 (has links)
Juesheng Fu. / "April 2003." / Thesis (Ph.D.)--Chinese University of Hong Kong, 2003. / Includes bibliographical references (p. 124-132). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. Ann Arbor, MI : ProQuest Information and Learning Company, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Mode of access: World Wide Web.
160

Investigations on number selection for finite mixture models and clustering analysis.

January 1997 (has links)
by Yiu Ming Cheung. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1997. / Includes bibliographical references (leaves 92-99). / Abstract --- p.i / Acknowledgement --- p.iii / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Background --- p.1 / Chapter 1.1.1 --- Bayesian YING-YANG Learning Theory and Number Selec- tion Criterion --- p.5 / Chapter 1.2 --- General Motivation --- p.6 / Chapter 1.3 --- Contributions of the Thesis --- p.6 / Chapter 1.4 --- Other Related Contributions --- p.7 / Chapter 1.4.1 --- A Fast Number Detection Approach --- p.7 / Chapter 1.4.2 --- Application of RPCL to Prediction Models for Time Series Forecasting --- p.7 / Chapter 1.4.3 --- Publications --- p.8 / Chapter 1.5 --- Outline of the Thesis --- p.8 / Chapter 2 --- Open Problem: How Many Clusters? --- p.11 / Chapter 3 --- Bayesian YING-YANG Learning Theory: Review and Experiments --- p.17 / Chapter 3.1 --- Briefly Review of Bayesian YING-YANG Learning Theory --- p.18 / Chapter 3.2 --- Number Selection Criterion --- p.20 / Chapter 3.3 --- Experiments --- p.23 / Chapter 3.3.1 --- Experimental Purposes and Data Sets --- p.23 / Chapter 3.3.2 --- Experimental Results --- p.23 / Chapter 4 --- Conditions of Number Selection Criterion --- p.39 / Chapter 4.1 --- Alternative Condition of Number Selection Criterion --- p.40 / Chapter 4.2 --- Conditions of Special Hard-cut Criterion --- p.45 / Chapter 4.2.1 --- Criterion Conditions in Two-Gaussian Case --- p.45 / Chapter 4.2.2 --- Criterion Conditions in k*-Gaussian Case --- p.59 / Chapter 4.3 --- Experimental Results --- p.60 / Chapter 4.3.1 --- Purpose and Data Sets --- p.60 / Chapter 4.3.2 --- Experimental Results --- p.63 / Chapter 4.4 --- Discussion --- p.63 / Chapter 5 --- Application of Number Selection Criterion to Data Classification --- p.80 / Chapter 5.1 --- Unsupervised Classification --- p.80 / Chapter 5.1.1 --- Experiments --- p.81 / Chapter 5.2 --- Supervised Classification --- p.82 / Chapter 5.2.1 --- RBF Network --- p.85 / Chapter 5.2.2 --- Experiments --- p.86 / Chapter 6 --- Conclusion and Future Work --- p.89 / Chapter 6.1 --- Conclusion --- p.89 / Chapter 6.2 --- Future Work --- p.90 / Bibliography --- p.92 / Chapter A --- A Number Detection Approach for Equal-and-Isotropic Variance Clusters --- p.100 / Chapter A.1 --- Number Detection Approach --- p.100 / Chapter A.2 --- Demonstration Experiments --- p.102 / Chapter A.3 --- Remarks --- p.105 / Chapter B --- RBF Network with RPCL Approach --- p.106 / Chapter B.l --- Introduction --- p.106 / Chapter B.2 --- Normalized RBF net and Extended Normalized RBF Net --- p.108 / Chapter B.3 --- Demonstration --- p.110 / Chapter B.4 --- Remarks --- p.113 / Chapter C --- Adaptive RPCL-CLP Model for Financial Forecasting --- p.114 / Chapter C.1 --- Introduction --- p.114 / Chapter C.2 --- Extraction of Input Patterns and Outputs --- p.115 / Chapter C.3 --- RPCL-CLP Model --- p.116 / Chapter C.3.1 --- RPCL-CLP Architecture --- p.116 / Chapter C.3.2 --- Training Stage of RPCL-CLP --- p.117 / Chapter C.3.3 --- Prediction Stage of RPCL-CLP --- p.122 / Chapter C.4 --- Adaptive RPCL-CLP Model --- p.122 / Chapter C.4.1 --- Data Pre-and-Post Processing --- p.122 / Chapter C.4.2 --- Architecture and Implementation --- p.122 / Chapter C.5 --- Computer Experiments --- p.125 / Chapter C.5.1 --- Data Sets and Experimental Purpose --- p.125 / Chapter C.5.2 --- Experimental Results --- p.126 / Chapter C.6 --- Conclusion --- p.134 / Chapter D --- Publication List --- p.135 / Chapter D.1 --- Publication List --- p.135
