241

Bayesian analysis of cosmological models.

Moodley, Darell. January 2010 (has links)
In this thesis, we utilise the framework of Bayesian statistics to discriminate between models of the cosmological mass function. We first review the cosmological model and the formation and distribution of galaxy clusters before formulating a statistic within the Bayesian framework, namely the Bayesian razor, that allows model testing of probability distributions. The Bayesian razor is used to discriminate between three popular mass functions, namely the Press-Schechter, Sheth-Tormen and normalisable Tinker models. With a small number of particles in the simulation, we find that the simpler model is preferred due to the Occam's razor effect, but as the size of the simulation increases, the more complex model, if taken to be the true model, is preferred. We establish criteria for the simulation size required to decisively favour a given model, and investigate how this size depends on the threshold cluster mass and on the choice of prior probability distributions. Finally, we outline how our method can be extended to consider more realistic N-body simulations or be applied to observational data. / Thesis (M.Sc.)-University of KwaZulu-Natal, Westville, 2010.
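For orientation, the model-comparison quantity underlying a razor of this kind can be sketched in standard notation (ours, not the thesis's): each mass-function model M_i is scored by its marginal likelihood (evidence), and two models are compared through their Bayes factor.

\[ Z_i = p(D \mid M_i) = \int p(D \mid \theta, M_i)\, p(\theta \mid M_i)\, d\theta, \qquad B_{12} = \frac{Z_1}{Z_2}. \]

The Occam's razor effect mentioned above arises because the integral averages the likelihood over the prior volume, penalising a more flexible model unless the data actually require its extra freedom.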
242

A comparison of Bayesian variable selection approaches for linear models

Rahman, Husneara 03 May 2014 (has links)
Bayesian variable selection approaches are more powerful at discriminating among models, regardless of whether the models under investigation are hierarchical. Although Bayesian approaches require complex computation, Markov chain Monte Carlo (MCMC) methods, such as the Gibbs sampler and the Metropolis-Hastings algorithm, make the computations easier. In this study we investigated the effectiveness of Bayesian variable selection approaches in comparison to non-Bayesian, or classical, approaches. For this purpose, we compared the performance of Bayesian versus non-Bayesian variable selection approaches for linear models. Among the Bayesian approaches, we studied the Conditional Predictive Ordinate (CPO) and the Bayes factor. Among the non-Bayesian or classical approaches, we implemented adjusted R-squared, the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC) for model selection. We performed a simulation study to examine how Bayesian and non-Bayesian approaches perform in selecting variables. We also applied these methods to real data and compared their performances. We observed that, for linear models, Bayesian variable selection approaches perform consistently with their non-Bayesian counterparts. / Bayesian inference -- Bayesian inference for normally distributed likelihood -- Model adequacy -- Simulation approach -- Application to wage data. / Department of Mathematical Sciences
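As a concrete reference point for the classical criteria named above, here is a minimal sketch (ours, not the thesis's code) that scores one candidate linear model by adjusted R-squared, AIC and BIC from an ordinary least-squares fit; the Gaussian-likelihood forms of AIC and BIC are assumed.

import numpy as np

def model_scores(X, y):
    """Score one candidate design matrix X against response y."""
    n, k = X.shape
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    rss = float(resid @ resid)
    tss = float(((y - y.mean()) ** 2).sum())
    adj_r2 = 1.0 - (rss / (n - k)) / (tss / (n - 1))
    aic = n * np.log(rss / n) + 2 * k            # Gaussian-likelihood form
    bic = n * np.log(rss / n) + k * np.log(n)
    return {"adjR2": adj_r2, "AIC": aic, "BIC": bic}

Lower AIC/BIC and higher adjusted R-squared favour a model; the Bayesian criteria (CPO, Bayes factor) instead score models through the posterior and the marginal likelihood.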
243

Some problems in Bayesian group decisions

Yen, Peng-Fang January 1992 (has links)
One employs the mathematical analysis of decision making when the state of nature is uncertain but further information about it can be obtained by experimentation. Bayesian Decision Theory concerns practical problems of decision making under conditions of uncertainty and also requires the use of statistical and mathematical methods. In this thesis, some basic risk sharing and group decision concepts are provided. Risk is the expected value of the loss function of a Bayesian estimator. Group decision problems consider situations in which the individuals need to agree both on utilities for consequences and on conditional probability assessments for different experimental outcomes. / Department of Mathematical Sciences
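In standard notation (ours), the risk referred to here is the expected loss of a decision rule, and the Bayes risk averages it over the prior:

\[ R(\theta, \delta) = \mathbb{E}_{X \mid \theta}\big[ L(\theta, \delta(X)) \big], \qquad r(\pi, \delta) = \int R(\theta, \delta)\, \pi(\theta)\, d\theta. \]

A Bayes estimator minimises the posterior expected loss; under squared-error loss it is the posterior mean.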
244

From 'tree' based Bayesian networks to mutual information classifiers : deriving a singly connected network classifier using an information theory based technique

Thomas, Clifford S. January 2005 (has links)
For reasoning under uncertainty the Bayesian network has become the representation of choice. However, except where models are considered 'simple', the tasks of construction and inference are provably NP-hard. For modelling larger 'real' world problems this computational complexity has been addressed by methods that approximate the model. The Naive Bayes classifier, which makes strong assumptions of independence among features, is a common approach, whilst the class of trees is another, less extreme, example. In this thesis we propose the use of an information theory based technique as a mechanism for inference in Singly Connected Networks. We call this a Mutual Information Measure classifier, as it corresponds to the restricted class of trees built from mutual information. We show that the new approach provides for both an efficient and localised method of classification, with performance accuracies comparable to those of the less restricted general Bayesian networks. To improve the performance of the classifier, we additionally investigate the possibility of expanding the class Markov blanket by use of a wrapper approach, and further show that performance can be improved by focusing on the class Markov blanket, and that the improvement is not at the expense of increased complexity. Finally, the two methods are applied to the task of diagnosis in the 'real' world medical domain of Acute Abdominal Pain. Known to be a difficult and challenging domain to classify, the objective was to investigate the optimality claims that some researchers have made for the Naive Bayes classifier in this domain. Despite some loss of representational capability, we show that the Mutual Information Measure classifier can be effectively applied to the domain and also provides a recognisable qualitative structure without violating 'real' world assertions. In respect of its 'selective' variant, we further show that it achieves predictive accuracy comparable to the Naive Bayes classifier, and that the Naive Bayes classifier's 'overall' performance is largely due to the contribution of the majority group, Non-Specific Abdominal Pain, a diagnosis of exclusion.
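The restricted class of trees built from mutual information corresponds to the classical Chow-Liu construction: estimate pairwise mutual information between variables and keep a maximum-weight spanning tree. A minimal sketch for discrete data, assuming numpy and scipy; the plug-in estimator and all names are illustrative, not the thesis's code.

import numpy as np
from itertools import combinations
from scipy.sparse.csgraph import minimum_spanning_tree

def mutual_information(x, y):
    """Plug-in mutual information estimate (in nats) for two discrete variables."""
    mi = 0.0
    for a in np.unique(x):
        px = np.mean(x == a)
        for b in np.unique(y):
            pxy = np.mean((x == a) & (y == b))
            if pxy > 0.0:
                py = np.mean(y == b)
                mi += pxy * np.log(pxy / (px * py))
    return mi

def chow_liu_edges(data):
    """data: (n_samples, n_vars) integer array; returns the tree's edges."""
    n_vars = data.shape[1]
    weights = np.zeros((n_vars, n_vars))
    for i, j in combinations(range(n_vars), 2):
        weights[i, j] = mutual_information(data[:, i], data[:, j])
    # Maximum-weight spanning tree = minimum spanning tree of negated weights.
    # (Exactly-zero MI pairs are treated as absent edges, harmless in practice.)
    tree = minimum_spanning_tree(-weights)
    return [(int(i), int(j)) for i, j in zip(*tree.nonzero())]

The resulting edges define a singly connected (tree-shaped) network, over which exact inference is linear in the number of variables.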
245

Bayesian collocation tempering and generalized profiling for estimation of parameters from differential equation models

Campbell, David Alexander. January 2007 (has links)
Despite their widespread use, ordinary differential equation (ODE) models have long been underrepresented in the statistical literature. The most common methods for estimating parameters from ODE models are nonlinear least squares and an MCMC-based method. Both of these depend on a likelihood involving the numerical solution to the ODE. The challenge faced by these methods is parameter spaces that are difficult to navigate, exacerbated by the wide variety of behaviours that a single ODE model can produce with respect to small changes in parameter values. / In this work, two competing methods, generalized profile estimation and Bayesian collocation tempering, are described. Both of these methods use a basis expansion to approximate the ODE solution in the likelihood, where the shape of the basis expansion, or data smooth, is guided by the ODE model. This approximation to the ODE smooths out the likelihood surface, reducing restrictions on parameter movement. / Generalized profile estimation maximizes the profile likelihood for the ODE parameters while profiling out the basis coefficients of the data smooth. The smoothing parameter determines the balance between fitting the data and the ODE model, and consequently is used to build a parameter cascade, reducing the dimension of the estimation problem. Generalized profile estimation is also described under a constraint that ensures the smooth follows known behaviour such as monotonicity or non-negativity. / Bayesian collocation tempering uses a sequence of posterior densities with smooth approximations to the ODE solution. The level of the approximation is determined by the value of the smoothing parameter, which also determines the level of smoothness in the likelihood surface. In an algorithm similar to parallel tempering, parallel MCMC chains are run to sample from the sequence of posterior densities, while allowing ODE parameters to swap between chains. This method is introduced and tested against a variety of alternative Bayesian models, in terms of posterior variance and rate of convergence. / The performance of generalized profile estimation and Bayesian collocation tempering is tested and compared using simulated data sets from the FitzHugh-Nagumo ODE system and real data from nylon production dynamics.
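Since the algorithm is described as similar to parallel tempering, the between-chain swap move can be sketched as follows. This is a generic sketch under our own assumptions, not the thesis's code: each chain c holds a parameter state theta[c] and a smoothing parameter lam[c], and logp(state, lam_value) is an unnormalized log posterior at that rung of the ladder.

import numpy as np

rng = np.random.default_rng(0)

def swap_step(theta, lam, logp):
    """One round of state swaps between adjacent tempered chains.

    theta -- list of per-chain ODE-parameter states
    lam   -- matching list of smoothing parameters (the tempering ladder)
    logp  -- logp(state, lam_value): unnormalized log posterior at that rung
    """
    for c in range(len(theta) - 1):
        # Metropolis log-ratio for exchanging the states of chains c and c+1.
        log_alpha = (logp(theta[c + 1], lam[c]) + logp(theta[c], lam[c + 1])
                     - logp(theta[c], lam[c]) - logp(theta[c + 1], lam[c + 1]))
        if np.log(rng.uniform()) < log_alpha:
            theta[c], theta[c + 1] = theta[c + 1], theta[c]
    return theta

Swaps let parameters explored under a heavily smoothed (easy) likelihood migrate toward the chain targeting the sharpest approximation, which is how tempering escapes the difficult regions of the parameter space described above.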
246

Bayesian statistical models for predicting software effort using small datasets

Van Koten, Chikako, n/a January 2007 (has links)
The need of today's society for new technology has resulted in the development of a growing number of software systems. Developing a software system is a complex endeavour that requires a large amount of time. This amount of time is referred to as software development effort. Software development effort is the sum of hours spent by all individuals involved, and is therefore not equal to the duration of the development. Accurate prediction of the effort at an early stage of development is an important factor in the successful completion of a software system, since it enables the developing organization to allocate and manage its resources effectively. However, for many software systems, accurately predicting the effort is a challenge. Hence, a model that assists in the prediction is of active interest to software practitioners and researchers alike. Software development effort varies depending on many variables that are specific to the system, its development environment and the organization in which it is being developed. An accurate model for predicting software development effort can often be built specifically for the target system and its development environment. A local dataset of systems similar to the target system, developed in a similar environment, is then used to calibrate the model. However, such a dataset often consists of fewer than 10 software systems, causing a serious problem in the prediction, since the predictive accuracy of existing models deteriorates as the size of the dataset decreases. This research addressed this problem with a new approach using Bayesian statistics. This particular approach was chosen since the predictive accuracy of a Bayesian statistical model is not as dependent on a large dataset as that of other models. As the size of the dataset decreases to fewer than 10 software systems, the accuracy deterioration of the model is expected to be less than that of existing models. The Bayesian statistical model can also provide additional information useful for predicting software development effort, because it is capable of selecting important variables from multiple candidates. In addition, it is parametric and produces an uncertainty estimate. This research developed new Bayesian statistical models for predicting software development effort. Their predictive accuracy was then evaluated in four case studies using different datasets, and compared with other models applicable to the same small datasets. The results confirmed that the best new models are not only accurate but also consistently more accurate than their regression counterparts when calibrated with fewer than 10 systems. They can thus replace the regression model when using small datasets. Furthermore, one case study showed that the best new models are more accurate than a simple model that predicts the effort by calculating the average value of the calibration data. Two case studies also indicated that the best new models can be more accurate for some software systems than a case-based reasoning model. Since the case studies provided sufficient empirical evidence that, for small datasets, the new models are generally more accurate than the existing models compared, this research has produced a methodology for predicting software development effort using the new models.
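The thesis's specific models are not given in this record; as a generic illustration of why Bayesian regression degrades gracefully on small calibration sets, here is a conjugate normal-inverse-gamma sketch. All names and hyperparameters are illustrative assumptions, not the thesis's method; it returns a predictive mean together with an uncertainty estimate, the two outputs the abstract highlights.

import numpy as np

def bayes_linreg_predict(X, y, X_new, a0=1.0, b0=1.0, tau2=100.0):
    """Conjugate Bayesian linear regression (illustrative hyperparameters).

    Prior: beta | sigma^2 ~ N(0, sigma^2 * tau2 * I), sigma^2 ~ InvGamma(a0, b0).
    Returns the posterior predictive mean and variance at the rows of X_new.
    """
    n, k = X.shape
    V0_inv = np.eye(k) / tau2
    Vn = np.linalg.inv(V0_inv + X.T @ X)   # posterior covariance factor for beta
    mn = Vn @ (X.T @ y)                    # posterior mean of beta
    an = a0 + n / 2.0
    bn = b0 + 0.5 * (y @ y - mn @ (V0_inv + X.T @ X) @ mn)
    s2 = bn / (an - 1.0)                   # posterior mean of sigma^2
    mean = X_new @ mn
    var = s2 * (1.0 + np.einsum('ij,jk,ik->i', X_new, Vn, X_new))
    return mean, var

With only a handful of calibration systems, the prior keeps the posterior proper and the predictive variance honestly wide, which is the small-dataset behaviour the abstract attributes to the Bayesian models.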
247

A local likelihood active contour model of medical image segmentation

Zhang, Jie. January 2007 (has links)
Thesis (M.S.)--Ohio University, August, 2007. / Title from PDF t.p. Includes bibliographical references.
248

Dynamic Bayesian networks for online stochastic modeling

Cho, Hyun Cheol. January 2006 (has links)
Thesis (Ph. D.)--University of Nevada, Reno, 2006. / "August, 2006." Includes bibliographical references (leaves 124-135). Online version available on the World Wide Web.
249

Effective decision-theoretic assistance through relational hierarchical models

Natarajan, Sriraam. January 1900 (has links)
Thesis (Ph. D.)--Oregon State University, 2008. / Printout. Includes bibliographical references (leaves 146-150). Also available on the World Wide Web.
250

Sampling-based Bayesian latent variable regression methods with applications in process engineering

Chen, Hongshu, January 2007 (has links)
Thesis (Ph. D.)--Ohio State University, 2007. / Title from first page of PDF file. Includes bibliographical references (p. 168-180).
