71

Linear growth Bayesian model using discount factors

CRISTIANO AUGUSTO COELHO FERNANDES 17 November 2006 (has links)
The aim of this dissertation is to describe and discuss the multiprocess linear growth Bayesian model for seasonal and/or nonseasonal time series, using discount factors. The original formulation of this model was put forward by Ameen and Harrison. In the first part of the work (chapters 2 and 3) we present general concepts related to time series and the main models in the literature, whereas in the second part (chapters 4, 5 and 6) we cover Bayesian statistics in general terms, the linear growth model in its original formulation, and the proposed model itself. A flow chart for operating the model and some suggested parameter settings, aiming at a future computational implementation, are also presented.
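To make the discount idea concrete, here is a minimal sketch of filtering a linear growth (level plus slope) dynamic linear model in which a single discount factor inflates the prior state variance in place of an explicit evolution variance, in the spirit of Ameen and Harrison. The toy data, the observation variance, and the discount value 0.9 are illustrative assumptions, not details taken from the dissertation.

```python
# Minimal discount-factor DLM filter for a linear growth model (sketch).
import numpy as np

F = np.array([1.0, 0.0])            # observation vector: y_t = level + noise
G = np.array([[1.0, 1.0],
              [0.0, 1.0]])          # evolution: level_t = level_{t-1} + slope_{t-1}
V = 1.0                             # observation variance (assumed)
delta = 0.9                         # discount factor (assumed)

def filter_step(m, C, y):
    """One filtering step: discounting replaces an explicit evolution variance."""
    a = G @ m                       # prior mean for the evolved state
    R = G @ C @ G.T / delta         # discounting inflates the prior variance
    f = F @ a                       # one-step forecast mean
    Q = F @ R @ F + V               # one-step forecast variance
    A = R @ F / Q                   # adaptive (Kalman-like) gain
    m_new = a + A * (y - f)         # posterior mean
    C_new = R - np.outer(A, A) * Q  # posterior variance
    return m_new, C_new

m, C = np.array([0.0, 0.0]), np.eye(2) * 100.0   # vague initial prior
for y in [1.1, 2.3, 2.9, 4.2, 5.1]:              # toy observations
    m, C = filter_step(m, C, y)
print("level, slope estimates:", m)
```

A smaller discount makes the prior variance grow faster between observations, so the filter adapts more quickly at the cost of noisier estimates.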
72

Comparison of Bayesian learning and conjugate gradient descent training of neural networks

Nortje, W D 09 November 2004 (has links)
Neural networks are used in various fields to make predictions about the future value of a time series, or about the class membership of a given object. For the network to be effective, it needs to be trained on a set of training data combined with the expected results. Two aspects to keep in mind when considering a neural network as a solution are the required training time and the prediction accuracy. This research compares the classification accuracy of conjugate gradient descent neural networks and Bayesian learning neural networks. Conjugate gradient descent networks are known for their short training times, but are not very consistent, and their results are heavily dependent on initial training conditions. Bayesian networks are slower, but much more consistent. The two types of neural networks are compared, and some attempts are made to combine their strong points in order to achieve shorter training times while maintaining a high classification accuracy. Bayesian learning outperforms the gradient descent methods by almost 1%, while the hybrid method achieves results between those of Bayesian learning and gradient descent. The drawback of the hybrid method is that there is no speed improvement over Bayesian learning. / Dissertation (MEng (Electronics))--University of Pretoria, 2005. / Electrical, Electronic and Computer Engineering / unrestricted
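As a rough illustration of the conjugate gradient side of this comparison, the sketch below trains a tiny one-hidden-layer network by handing its cross-entropy loss to scipy's CG optimizer. The architecture, hidden-layer size, and toy data are assumptions made for illustration and are unrelated to the networks actually studied in the dissertation.

```python
# Conjugate gradient training of a toy one-hidden-layer network (sketch).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))                  # toy inputs
y = (X[:, 0] * X[:, 1] > 0).astype(float)      # toy binary labels
H = 5                                          # hidden units (assumed)
shapes = [(2, H), (H,), (H, 1), (1,)]          # W1, b1, W2, b2
sizes = [int(np.prod(s)) for s in shapes]

def unpack(theta):
    """Split the flat parameter vector back into weight matrices."""
    parts, i = [], 0
    for s, n in zip(shapes, sizes):
        parts.append(theta[i:i + n].reshape(s))
        i += n
    return parts

def loss(theta):
    """Cross-entropy of a tanh-hidden, sigmoid-output network."""
    W1, b1, W2, b2 = unpack(theta)
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    p = p.ravel().clip(1e-9, 1 - 1e-9)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

theta0 = rng.normal(scale=0.1, size=sum(sizes))
result = minimize(loss, theta0, method="CG")   # conjugate gradient descent
print("final cross-entropy:", result.fun)
```

Rerunning from a different `theta0` can give noticeably different results, which is the sensitivity to initial conditions the abstract describes.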
73

A Bayesian Subgroup Analysis Using An Additive Model

Xiao, Yang January 2013 (has links)
No description available.
74

Bayesian decision-makers reaching consensus using expert information

Garisch, I. January 2009 (has links)
Published Article / The paper is concerned with the problem of Bayesian decision-makers seeking consensus about the decision that should be taken from a decision space. Each decision-maker has his own utility function, and it is assumed that the parameter space has two points, Θ = {θ1, θ2}. The decision-makers' initial probabilities for Θ can be updated by information provided by an expert. Each decision-maker holds an opinion about the expert, formed by observing the expert's past performance. It is shown how the decision-makers can decide beforehand, on the basis of this opinion, whether consulting the expert will result in consensus.
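To illustrate the kind of update involved, here is a minimal sketch of a decision-maker with a two-point parameter space revising a prior after hearing an expert. The reliability model, a single probability `rho` that the expert names the true state, is an illustrative assumption, not the paper's formulation of opinions about experts.

```python
# Bayesian update over a two-point parameter space from an expert report (sketch).
def posterior_theta1(prior_theta1, rho, expert_says_theta1):
    """Posterior probability of theta1 after hearing the expert's report.

    rho is the decision-maker's assessed probability that the expert
    names the true state (an assumed reliability model).
    """
    like_theta1 = rho if expert_says_theta1 else 1 - rho      # P(report | theta1)
    like_theta2 = 1 - rho if expert_says_theta1 else rho      # P(report | theta2)
    num = like_theta1 * prior_theta1
    return num / (num + like_theta2 * (1 - prior_theta1))

# Two decision-makers with different priors and different opinions of the expert:
for prior, rho in [(0.3, 0.9), (0.6, 0.7)]:
    print(posterior_theta1(prior, rho, expert_says_theta1=True))
```

Whether the two posteriors land close enough to support a common decision depends on the priors, the utilities, and the assessed reliabilities, which is the question the paper addresses in advance of consulting the expert.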
75

Sleeping Beauty: A New Problem for Halfers

Nielsen, Michael 12 August 2014 (has links)
I argue against the halfer response to the Sleeping Beauty case by presenting a new problem for halfers. When the original Sleeping Beauty case is generalized, it follows from the halfer’s key premise that Beauty must update her credence in a fair coin’s landing heads in such a way that it becomes arbitrarily close to certainty. This result is clearly absurd. I go on to argue that the halfer’s key premise must be rejected on pain of absurdity, leaving the halfer response to the original Sleeping Beauty case unsupported. I consider two ways that halfers might avoid the absurdity without giving up their key premise. Neither way succeeds. My argument lends support to the thirder response, and, in particular, to the idea that agents may be rationally compelled to update their beliefs despite not having learned any new evidence.
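For readers new to the case, the standard credences at issue run as follows. This is a sketch of the familiar halfer and thirder positions and one common generalization, not of the paper's new argument:

```latex
% Halfer: waking provides no new evidence, so the credence is unchanged:
%   P(Heads | awake) = P(Heads) = 1/2.
% Thirder: the three possible awakenings (Mon-Heads, Mon-Tails, Tue-Tails)
% are treated as equally likely, so
%   P(Heads | awake) = 1/3,
% and with n tails-awakenings in place of two this generalizes to
\[
  P(\mathrm{Heads} \mid \mathrm{awake}) \;=\; \frac{1}{n+1}.
\]
```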
76

Sampling approaches in Bayesian computational statistics with R

Sun, Wenwen 27 August 2010 (has links)
Bayesian analysis differs fundamentally from classical statistical methods. Although both make use of subjective judgment, classical methods confine it to the selection of models, whereas Bayesian models make it an explicit part of the analysis, allowing subjective beliefs to be combined with the collected data, the prior information to be updated, and inferences to be improved. The rapid growth of Bayesian applications shows how popular the approach has become, largely because the advent of computational methods (e.g., MCMC) makes sophisticated analyses tractable. The flexibility and generality of the Bayesian framework allow it to cope with very complex problems. One major obstacle in earlier Bayesian analysis was sampling from the usually complex posterior distribution; with modern techniques and fast-developing computational capacity, we now have the tools to solve this problem. We discuss acceptance-rejection sampling, importance sampling, and then the MCMC methods. The Metropolis-Hastings algorithm, a very versatile, efficient and powerful simulation technique for constructing a Markov chain, borrows the idea of the well-known acceptance-rejection sampling to generate candidates that are either accepted or rejected, but retains the current values when rejection takes place (1). A special case of the Metropolis-Hastings algorithm is the Gibbs sampler. When dealing with high-dimensional problems, the Gibbs sampler does not require a carefully chosen proposal distribution: it generates the Markov chain through univariate conditional probability distributions, which greatly simplifies the problem. We illustrate the use of these approaches with examples (with R code) to provide a thorough review. These basic methods have variants to deal with different situations and are building blocks for more advanced problems. This report is not a tutorial for statistics or the software R. The author assumes that readers are familiar with basic statistical concepts and common R statements. If needed, a detailed introduction to R programming can be found in the Comprehensive R Archive Network (CRAN): http://cran.R-project.org / text
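As a concrete instance of the Metropolis-Hastings construction described above, here is a minimal random-walk sampler: propose a move, accept it with the Metropolis probability, and otherwise retain the current value. The standard-normal target and the proposal scale are illustrative assumptions, and the sketch is in Python rather than the R used in the report.

```python
# Random-walk Metropolis-Hastings for a standard normal target (sketch).
import numpy as np

def log_target(x):
    return -0.5 * x**2              # unnormalized log density of N(0, 1)

def metropolis_hastings(n_samples, proposal_sd=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x = 0.0
    samples = np.empty(n_samples)
    for i in range(n_samples):
        candidate = x + rng.normal(scale=proposal_sd)   # propose a move
        log_alpha = log_target(candidate) - log_target(x)
        if np.log(rng.uniform()) < log_alpha:           # accept the candidate...
            x = candidate
        samples[i] = x                                  # ...else retain the current value
    return samples

draws = metropolis_hastings(5000)
print("sample mean and sd:", draws.mean(), draws.std())
```

A Gibbs sampler replaces the propose/accept step with exact draws from each univariate full conditional in turn, which is why no proposal distribution needs to be tuned.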
77

Efficient implementation of Markov chain Monte Carlo

Fan, Yanan January 2001 (has links)
No description available.
78

Bayesian locally weighted online learning

Edakunni, Narayanan U. January 2010 (has links)
Locally weighted regression is a non-parametric regression technique capable of coping with non-stationarity of the input distribution. Online algorithms like Receptive Field Weighted Regression and Locally Weighted Projection Regression use a sparse representation of the locally weighted model to approximate a target function, resulting in an efficient learning algorithm. However, these algorithms are fairly sensitive to parameter initializations and have multiple open learning parameters that are usually set using some insight into the problem and local heuristics. In this thesis, we attempt to alleviate these problems by using a probabilistic formulation of locally weighted regression followed by a principled Bayesian inference of the parameters. In the Randomly Varying Coefficient (RVC) model developed in this thesis, locally weighted regression is set up as an ensemble of regression experts that provide a local linear approximation to the target function. We train the individual experts independently and then combine their predictions using a Product of Experts formalism. Independent training of experts allows us to adapt the complexity of the regression model dynamically while learning in an online fashion. The local experts themselves are modeled using a hierarchical Bayesian probability distribution, with Variational Bayesian Expectation Maximization steps to learn the posterior distributions over the parameters. The Bayesian modeling of the local experts leads to an inference procedure that is fairly insensitive to parameter initializations and avoids problems like overfitting. We further exploit the Bayesian inference procedure to derive efficient online update rules for the parameters. Learning in the regression setting is also extended to handle classification tasks by using logistic regression to model discrete class labels. The main contribution of the thesis is a spatially localised online learning algorithm, set up in a probabilistic framework with principled Bayesian inference rules for the parameters, that learns local models completely independently of each other, uses only local information, and adapts the local model complexity in a data-driven fashion. This thesis, for the first time, brings together the computational efficiency and the adaptability of ‘non-competitive’ locally weighted learning schemes and the modelling guarantees of the Bayesian formulation.
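For orientation, the sketch below shows plain locally weighted (kernel-weighted linear) regression, the non-Bayesian starting point for models like RVC: fit a weighted linear model around each query point and predict there. The Gaussian kernel, the bandwidth, and the toy data are illustrative assumptions, and none of the thesis's Bayesian machinery (Product of Experts combination, variational updates) is shown.

```python
# Plain locally weighted linear regression at a single query point (sketch).
import numpy as np

def lwr_predict(x_query, X, y, bandwidth=0.3):
    """Fit a locally weighted linear model around x_query and predict there."""
    w = np.exp(-0.5 * ((X - x_query) / bandwidth) ** 2)   # Gaussian locality weights
    A = np.column_stack([np.ones_like(X), X])             # design matrix: [1, x]
    WA = A * w[:, None]                                   # weight each row
    beta = np.linalg.solve(A.T @ WA, WA.T @ y)            # weighted least squares
    return beta[0] + beta[1] * x_query

rng = np.random.default_rng(1)
X = np.sort(rng.uniform(0, 2 * np.pi, 200))               # toy inputs
y = np.sin(X) + rng.normal(scale=0.1, size=X.size)        # noisy sine target
print(lwr_predict(np.pi / 2, X, y))                       # should be close to 1
```

The bandwidth plays the role of the open learning parameters the abstract mentions; the thesis's contribution is to infer such quantities with Bayesian machinery rather than set them by hand.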
79

Bayesian numerical and approximation techniques for ARMA time series

Marriott, John M. January 1989 (has links)
No description available.
80

Neural networks and classification trees for misclassified data

Kalkandara, Karolina January 1998 (has links)
No description available.
