41

Valid estimation and prediction inference in analysis of a computer model

Nagy, Béla 11 1900
Computer models or simulators are becoming increasingly common in many fields of science and engineering, powered by the phenomenal growth in computer hardware over the past decades. Many of these simulators implement a particular mathematical model as a deterministic computer code, meaning that running the simulator again with the same input gives the same output. Often running the code involves computationally expensive tasks, such as numerically solving complex systems of partial differential equations. When simulator runs take too long, their usefulness is limited. In order to overcome time or budget constraints by making the most of limited computational resources, a statistical methodology known as the "Design and Analysis of Computer Experiments" has been proposed. The main idea is to run the expensive simulator at only a relatively small number of carefully chosen design points in the input space and, based on the outputs, construct an emulator (statistical model) that can emulate (predict) the output at new, untried locations at a fraction of the cost. This approach is useful provided that we can measure how much the predictions of the cheap emulator deviate from the real response surface of the original computer model. One way to quantify emulator error is to construct pointwise prediction bands designed to envelope the response surface, so that we can assert that the true response (simulator output) is enclosed by these envelopes with a certain probability. Of course, to be able to make such probabilistic statements, one needs to introduce some kind of randomness. A common strategy, used here, is to model the computer code as a random function, also known as a Gaussian stochastic process. We concern ourselves with smooth response surfaces and use the Gaussian covariance function, which is ideal when the response function is infinitely differentiable. In this thesis, we propose Fast Bayesian Inference (FBI), which is both computationally efficient and can be implemented as a black box. Simulation results show that it can achieve remarkably accurate prediction uncertainty assessments, in terms of matching coverage probabilities of the prediction bands, and that the associated reparameterizations can also aid parameter uncertainty assessments.
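As a rough illustration of the emulation idea described in this abstract, the sketch below fits a Gaussian-process emulator with a Gaussian (squared-exponential) covariance to a handful of runs of a toy "simulator" and forms pointwise prediction bands. The test function, design size, lengthscale, and jitter are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def gauss_cov(X1, X2, ls=0.3, var=1.0):
    # Gaussian (squared-exponential) covariance, suited to smooth,
    # infinitely differentiable response surfaces
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return var * np.exp(-0.5 * d2 / ls**2)

# toy "expensive simulator" and a small space-filling design (illustrative)
simulator = lambda x: np.sin(6 * x) + x
X_design = np.linspace(0.0, 1.0, 8)
y = simulator(X_design)

X_new = np.linspace(0.0, 1.0, 200)
K = gauss_cov(X_design, X_design) + 1e-8 * np.eye(len(X_design))  # jitter
Ks = gauss_cov(X_new, X_design)

mean = Ks @ np.linalg.solve(K, y)              # emulator prediction
KinvKs = np.linalg.solve(K, Ks.T)
var = 1.0 - np.einsum("ij,ji->i", Ks, KinvKs)  # pointwise predictive variance
band = 1.96 * np.sqrt(np.maximum(var, 0.0))    # ~95% pointwise band
```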
42

Bayesian model of axon guidance

Duncan Mortimer Unknown Date
An important mechanism during nervous system development is the guidance of axons by chemical gradients. The structure responsible for responding to chemical cues in the embryonic environment is the axonal growth cone -- a structure combining sensory and motor functions to direct axon growth. In this thesis, we develop a series of mathematical models for the gradient-based guidance of axonal growth cones, based on the idea that growth cones might be optimised for such a task. In particular, we study axon guidance within the framework of Bayesian decision theory, an approach that has recently proved very successful in understanding higher-level sensory processing problems. We build up our models in complexity, beginning with a one-dimensional array of chemoreceptors simply trying to decide whether an external gradient points to the right or the left. Even with this highly simplified model, we can obtain a good fit of theory to experiment. Furthermore, we find that the information a growth cone can obtain about the locations of its receptors has a strong influence on the functional dependence of gradient-sensing performance on average concentration. We find that the shape of the sensitivity curve is robust to changes in the precise inference strategy used to determine gradient detection, and depends only on the information the growth cone can obtain about the locations of its receptors. We then consider the optimal distribution of guidance cues for guidance over long range, and find that the same upper limit on guidance distance is reached regardless of whether only bound or only unbound receptors signal. We also discuss how information from multiple cues ought to be combined for optimal guidance. In chapters 5 and 6, we extend our model to two dimensions and to explicitly include temporal dynamics. The two-dimensional case yields results essentially equivalent to the one-dimensional model. In contrast, explicitly including temporal dynamics in our model leads to some significant departures from the one-dimensional and two-dimensional models, depending on the timescales over which various processes operate. Overall, we suggest that decision theory, in addition to providing a useful normative approach to studying growth cone chemotaxis, might provide a framework for understanding some of the biochemical pathways involved in growth cone chemotaxis and in the chemotaxis of other eukaryotic cells.
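To make the one-dimensional decision problem concrete, here is a minimal sketch of a Bayesian left-versus-right gradient decision from a single snapshot of noisy receptor binding. The linear concentration field, receptor count, and dissociation constant are assumptions for illustration, not the thesis's model.

```python
import numpy as np

rng = np.random.default_rng(0)

# receptor positions across a growth cone 10 units wide (illustrative)
x = np.linspace(-5, 5, 100)

def binding_prob(c0, grad, x):
    # linear concentration field; saturating receptor occupancy, Kd = 1 (assumed)
    c = c0 * (1 + grad * x / 10)
    return c / (c + 1.0)

c0, grad = 1.0, 0.1
bound = rng.random(100) < binding_prob(c0, grad, x)  # noisy binding snapshot

# Bayesian decision: compare likelihoods of "gradient right" vs "gradient left"
def loglik(sign):
    p = binding_prob(c0, sign * abs(grad), x)
    return np.sum(np.where(bound, np.log(p), np.log1p(-p)))

# posterior probability of "right" under equal priors
posterior_right = 1 / (1 + np.exp(loglik(-1) - loglik(+1)))
print(f"P(gradient points right | binding pattern) = {posterior_right:.3f}")
```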
43

A Joint Modeling Approach to Studying English Language Proficiency Development and Time-to-Reclassification

Matta, Tyler 01 May 2017
The development of academic English proficiency and the time it takes to reclassify to fluent English proficient status are key issues in monitoring the achievement of English learners. Yet little is known about academic English language development at the domain level (listening, speaking, reading, and writing), or about how English language development is associated with time-to-reclassification as an English proficient student. Although the substantive findings surrounding English proficiency and reclassification are of great import, the main focus of this dissertation was methodological: the exploration and testing of joint modeling methods for studying both issues. The first joint model studied was a multilevel, multivariate random effects model that estimated the student-specific and school-specific associations between different domains of English language proficiency. The second was a multilevel shared random effects model that estimated English proficiency development and time-to-reclassification simultaneously, treating the student-specific random effects as latent covariates in the time-to-reclassification model. These joint modeling approaches were illustrated using annual English language proficiency test scores and time-to-reclassification data from a large Arizona school district. Results from the multivariate random effects model revealed correlations greater than .5 among the reading, writing, and oral English proficiency random intercepts. The analysis of English proficiency development showed that some students attained proficiency in particular domains at different times, and that some students had not attained proficiency in a particular domain even when their total English proficiency score met the state benchmark for proficiency. These more specific domain score analyses highlight important differences in language development that may have implications for instruction and policy. The shared random effects model produced predictions of time-to-reclassification that were 97% accurate, compared to 80% accuracy from a conventional discrete-time hazard model. The time-to-reclassification analysis suggests that using information about English language development is critical for accurately predicting when a student will reclassify in this Arizona school district.
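A minimal sketch of the shared random effects structure described above: proficiency growth and a discrete-time reclassification hazard share a student-specific random intercept, so students with higher latent proficiency tend to reclassify sooner. All coefficients, sizes, and distributions are illustrative assumptions, not estimates from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_students, n_years = 500, 6

# student-specific random intercept shared by both submodels
b = rng.normal(0, 1, n_students)

# growth submodel: proficiency score over time (illustrative coefficients)
t = np.arange(n_years)
score = (2.0 + 0.5 * t[None, :] + b[:, None]
         + rng.normal(0, 0.3, (n_students, n_years)))

# discrete-time hazard submodel sharing the same random intercept
eta = -3.0 + 0.4 * t[None, :] + 0.8 * b[:, None]
hazard = 1 / (1 + np.exp(-eta))
reclassified = rng.random((n_students, n_years)) < hazard
time_to_reclass = np.where(reclassified.any(axis=1),
                           reclassified.argmax(axis=1),  # first year it occurs
                           n_years)                      # censored
```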
44

A Bayesian Network Approach to Early Reliability Assessment of Complex Systems

January 2016
Bayesian networks are powerful tools in system reliability assessment due to their flexibility in modeling the reliability structure of complex systems. This dissertation develops Bayesian network models for system reliability analysis through the use of Bayesian inference techniques. Bayesian networks generalize fault trees by allowing components and subsystems to be related by conditional probabilities instead of deterministic relationships; thus, they provide analytical advantages in situations where the failure structure is not well understood, especially during the product design stage. To tackle this problem, one needs to utilize auxiliary information such as reliability information from similar products and domain expertise. For this purpose, a Bayesian network approach is proposed to incorporate data from functional analysis and parent products. The functions with low reliability and their impact on other functions in the network are identified, so that design changes can be suggested for system reliability improvement. A complex system does not necessarily have all components monitored at the same time, posing another challenge for reliability assessment. Sometimes a limited number of sensors are deployed in the system to monitor the states of some components or subsystems, but not all of them. Data simultaneously collected from multiple sensors on the same system are analyzed using a Bayesian network approach, and the conditional probabilities of the network are estimated by combining failure information and expert opinions at both the system and component levels. Several data scenarios with discrete, continuous, and hybrid data (both discrete and continuous) are analyzed. Posterior distributions of the reliability parameters of the system and components are assessed using the simultaneous data. Finally, a Bayesian framework is proposed to incorporate and reconcile different sources of prior information, including expert opinions and component information, in order to form a prior distribution for the system. Incorporating expert opinion in the form of pseudo-observations substantially simplifies statistical modeling, as opposed to the pooling techniques and supra-Bayesian methods used for combining prior distributions in the literature. The proposed methods are demonstrated with several case studies.
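As a small illustration of how conditional probabilities relax a fault tree's deterministic gates, the sketch below computes the failure probability of a two-component system by enumeration over a toy Bayesian network. All probability values are made up for illustration; a deterministic series system would put 1.0 wherever either component fails.

```python
import numpy as np
from itertools import product

# marginal failure probabilities of two components (illustrative)
p_fail = {"A": 0.05, "B": 0.10}

# P(system fails | state of A, state of B); 1 = failed, 0 = working.
# Conditional probabilities replace the deterministic OR gate of a fault tree.
p_sys = {(0, 0): 0.01, (0, 1): 0.70, (1, 0): 0.80, (1, 1): 0.99}

# marginalize over all component states: P(sys) = sum_a,b P(a) P(b) P(sys|a,b)
total = 0.0
for a, b in product([0, 1], repeat=2):
    pa = p_fail["A"] if a else 1 - p_fail["A"]
    pb = p_fail["B"] if b else 1 - p_fail["B"]
    total += pa * pb * p_sys[(a, b)]

print(f"P(system failure) = {total:.4f}")
```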
45

Statistical physics for compressed sensing and information hiding / Física Estatística para Compressão e Ocultação de Dados

Antonio André Monteiro Manoel 22 September 2015
This thesis is divided into two parts. In the first part, we show how problems of statistical inference and combinatorial optimization may be approached within a unified framework that employs tools from fields as diverse as machine learning, statistical physics, and information theory, allowing us to i) design algorithms to solve the problems, ii) analyze the performance of these algorithms both empirically and analytically, and iii) compare the results obtained with the optimal achievable ones. In the second part, we use this framework to study two specific problems, one of inference (compressed sensing) and the other of optimization (information hiding). In both cases, we review current approaches, identify their flaws, and propose new schemes to address these flaws, building on the use of message-passing algorithms, variational inference techniques, and spin glass models from statistical physics.
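As one example of the message-passing algorithms mentioned above, here is a minimal sketch of approximate message passing (AMP) with soft thresholding for sparse recovery. The residual-based threshold schedule (with a tuning constant `alpha`) and the i.i.d. Gaussian sensing matrix are simplifying assumptions, not the thesis's exact scheme, and the constant may need tuning for other problem sizes.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def amp(A, y, n_iter=30, alpha=1.5):
    """AMP for y = A x + noise, x sparse; assumes A has iid N(0, 1/m) entries."""
    m, n = A.shape
    x, z = np.zeros(n), y.copy()
    for _ in range(n_iter):
        tau = alpha * np.sqrt(np.mean(z ** 2))   # threshold from residual level
        x_new = soft_threshold(x + A.T @ z, tau)
        # Onsager correction keeps the effective noise approximately Gaussian
        z = y - A @ x_new + (z / m) * np.count_nonzero(x_new)
        x = x_new
    return x

# toy usage on a synthetic sparse-recovery instance
rng = np.random.default_rng(6)
m, n, k = 100, 300, 10
A = rng.normal(0, 1 / np.sqrt(m), (m, n))
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)
y = A @ x0 + rng.normal(0, 0.01, m)
print("relative error:", np.linalg.norm(amp(A, y) - x0) / np.linalg.norm(x0))
```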
46

Lógica probabilística baseada em redes Bayesianas relacionais com inferência em primeira ordem. / Probabilistic logic based on Bayesian network with first order inference.

Rodrigo Bellizia Polastro 03 May 2012
This work presents three major contributions: i. a new probabilistic description logic; ii. a new algorithm for inference in terminologies expressed in this logic; iii. practical applications in real tasks. The proposed logic, referred to as crALC (credal ALC), adds probabilistic inclusions to the popular logic ALC, combining the usual acyclicity and Markov conditions and adopting interpretation-based semantics. As exact inference does not seem scalable due to the presence of quantifiers (existential and universal), we present a first-order loopy propagation algorithm that behaves appropriately for non-trivial domain sizes. A series of tests was done comparing the performance of the proposed algorithm against traditional ones; the results are favorable to the first-order algorithm. Two applications in the field of mobile robotics are presented, using the new probabilistic logic and the inference algorithm. Though the problems can be considered simple, they constitute the basis for many other tasks in mobile robotics, being an important step in knowledge representation and in reasoning about that knowledge.
47

Ponderação Bayesiana de modelos em regressão linear clássica / Bayesian model averaging in classic linear regression models

Hélio Rubens de Carvalho Nunes 07 October 2005
The objective of this work is to introduce the Bayesian Model Averaging (BMA) methodology to researchers in the agronomic field and to discuss its advantages and limitations. BMA makes it possible to combine the results of different models concerning a quantity of interest, and thus presents itself as an alternative data-analysis methodology to the usual model selection approaches such as the Coefficient of Multiple Determination (R2), the Adjusted Coefficient of Multiple Determination (R2adj), Mallows' Cp statistic, and the Prediction Error Sum of Squares (PRESS). Several studies have recently compared the performance of BMA against these model selection methods; however, many situations remain to be explored before a general conclusion about the methodology can be reached. In this work, BMA was applied to data from an agronomic experiment. Its predictive performance was then compared with that of the selection methods cited above through a simulation study varying the degree of multicollinearity, measured by the condition number of the standardized X'X matrix, and the sample size. In each of these situations, 1000 samples were generated from descriptive measures of real agronomic data sets. Predictive performance was measured by the Logarithm of the Predictive Score (LEP). The empirical results indicate that BMA performs similarly to the usual model selection methods in the multicollinearity situations explored in this work.
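A minimal sketch of the model averaging idea, using BIC-based weights as a common approximation to posterior model probabilities over all subsets of regressors; the data and prediction point are simulated for illustration, and this is not the dissertation's exact implementation.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
n, p = 50, 4
X = rng.normal(size=(n, p))
y = 1.0 + 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(0, 1, n)
x_new = np.r_[1.0, rng.normal(size=p)]          # point to predict at

bics, preds = [], []
for k in range(p + 1):
    for subset in combinations(range(p), k):
        # ordinary least squares fit of this candidate model
        Z = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        rss = np.sum((y - Z @ beta) ** 2)
        bics.append(n * np.log(rss / n) + Z.shape[1] * np.log(n))
        z_new = np.r_[1.0, [x_new[1 + j] for j in subset]]
        preds.append(z_new @ beta)

# BIC approximation to posterior model probabilities
b = np.array(bics)
w = np.exp(-0.5 * (b - b.min()))
w /= w.sum()
print("BMA prediction:", np.dot(w, preds))
```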
48

Eficiência de produção: um enfoque Bayesiano. / Production efficiency: a bayesian approach.

Juliana Garcia Cespedes 28 January 2004
The use of stochastic production frontiers with multiple outputs has attracted special interest in areas of economics confronted with the problem of quantifying the technical efficiency of firms. In classical statistics, when dealing with firms that produce several outputs, cost or profit functions are more commonly used to calculate this efficiency, but this requires more information about the data: besides the quantities of inputs and outputs, their prices and costs are also needed. When only information on the inputs (x) and outputs (y) is available, one must work with the production function, and the lack of sufficient statistics for some parameters makes the analysis difficult. The Bayesian approach can become a very useful tool in this case, because it makes it possible to obtain a sample from the probability distribution of the model parameters, from which summaries of interest can be computed. To obtain samples from these distributions, Markov chain Monte Carlo methods such as Gibbs sampling, Metropolis-Hastings, and slice sampling are used.
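As a small illustration of the MCMC machinery mentioned above, here is a random-walk Metropolis-Hastings sketch for a toy one-parameter efficiency posterior. The likelihood, prior, and proposal scale are assumed for illustration and are far simpler than a full stochastic frontier model.

```python
import numpy as np

rng = np.random.default_rng(3)

# toy posterior: inefficiency u > 0 with a half-normal prior and one noisy
# observation of exp(-u) (an efficiency score); all choices illustrative
obs = 0.8
def log_post(u):
    if u <= 0:
        return -np.inf
    return -0.5 * u**2 - 0.5 * ((obs - np.exp(-u)) / 0.1) ** 2

# random-walk Metropolis-Hastings
u, chain = 0.5, []
for _ in range(20000):
    prop = u + rng.normal(0, 0.2)
    if np.log(rng.random()) < log_post(prop) - log_post(u):
        u = prop                        # accept the proposal
    chain.append(u)

samples = np.array(chain[5000:])        # discard burn-in
print(f"posterior mean of u: {samples.mean():.3f}")
```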
49

Iterative receivers for digital communications via variational inference and estimation

Nissilä, M. (Mauri) 08 January 2008
In this thesis, iterative detection and estimation algorithms for digital communications systems in the presence of parametric uncertainty are explored and further developed. In particular, variational methods, which have been extensively applied in other research fields such as artificial intelligence and machine learning, are introduced and systematically used in deriving approximations to the optimal receivers in various channel conditions. The key idea behind the variational methods is to transform the problem of interest into an optimization problem via the introduction of extra degrees of freedom known as variational parameters. This is done so that, for fixed values of the free parameters, the transformed problem has a simple solution, thereby approximately solving the original problem. The thesis contributes to the state of the art of advanced receiver design in a number of ways. These include the development of new theoretical and conceptual viewpoints on iterative turbo-processing receivers as well as a new set of practical joint estimation and detection algorithms. Central to the theoretical studies is to show that many of the known low-complexity turbo receivers, such as linear minimum mean square error (MMSE) soft-input soft-output (SISO) equalizers and demodulators based on the Bayesian expectation-maximization (BEM) algorithm, can be formulated as solutions to the variational optimization problem. This new approach not only provides new insights into current designs and the structural properties of the relevant receivers, but also suggests some improvements on them. In addition, SISO detection in multipath fading channels is considered with the aim of obtaining a new class of low-complexity adaptive SISOs. As a result, a novel, unified method is proposed and applied in order to derive recursive versions of the classical Baum-Welch algorithm and its Bayesian counterpart, referred to as the BEM algorithm. These formulations are shown to yield computationally attractive soft decision-directed (SDD) channel estimators for both deterministic and Rayleigh fading intersymbol interference (ISI) channels. Next, by modeling the multipath fading channel as a complex bandpass autoregressive (AR) process, it is shown that the statistical parameters of radio channels, such as frequency offset, Doppler spread, and power-delay profile, can be conveniently extracted from the estimated AR parameters, which in turn may be derived via an EM algorithm. Such a joint estimator for all relevant radio channel parameters has a number of virtues, particularly its capability to perform equally well in a variety of channel conditions. Lastly, adaptive iterative detection in the presence of phase uncertainty is investigated. As a result, novel iterative joint Bayesian estimation and symbol a posteriori probability (APP) computation algorithms, based on the variational Bayesian method, are proposed for both constant-phase channel models and dynamic phase models, and their performance is evaluated via computer simulations.
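To illustrate the AR-based channel parameter extraction, the sketch below fits a complex AR(1) model to a synthetic fading tap by least squares (a simple stand-in for the EM estimator described in the thesis) and reads the frequency offset and pole radius off the fitted coefficient; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
fs, f_off, a_mag = 1000.0, 50.0, 0.95   # sample rate (Hz), offset, pole radius

# synthetic complex AR(1) fading tap: h[t] = a * h[t-1] + w[t]
a_true = a_mag * np.exp(2j * np.pi * f_off / fs)
h = np.zeros(5000, dtype=complex)
for t in range(1, len(h)):
    h[t] = a_true * h[t - 1] + (rng.normal(size=2) @ [1, 1j]) * 0.1

# least-squares AR(1) fit: a_hat = <h[t-1], h[t]> / <h[t-1], h[t-1]>
a_hat = np.vdot(h[:-1], h[1:]) / np.vdot(h[:-1], h[:-1])

# channel statistics recovered from the AR parameter
print("frequency offset ≈", np.angle(a_hat) * fs / (2 * np.pi), "Hz")
print("pole radius (relates to Doppler spread):", abs(a_hat))
```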
50

Improving the Computational Efficiency in Bayesian Fitting of Cormack-Jolly-Seber Models with Individual, Continuous, Time-Varying Covariates

Burchett, Woodrow 01 January 2017
The extension of the CJS model to include individual, continuous, time-varying covariates relies on the estimation of covariate values on occasions on which individuals were not captured. Fitting this model in a Bayesian framework typically involves the implementation of a Markov chain Monte Carlo (MCMC) algorithm, such as a Gibbs sampler, to sample from the posterior distribution. For large data sets with many missing covariate values that must be estimated, this creates a computational issue, as each iteration of the MCMC algorithm requires sampling from the full conditional distributions of each missing covariate value. This dissertation examines two solutions to address this problem. First, I explore variational Bayesian algorithms, which derive inference from an approximation to the posterior distribution that can be fit quickly in many complex problems. Second, I consider an alternative approximation to the posterior distribution derived by truncating the individual capture histories in order to reduce the number of missing covariates that must be updated during the MCMC sampling algorithm. In both cases, the increased computational efficiency comes at the cost of producing approximate inferences. The variational Bayesian algorithms generally do not estimate the posterior variance very accurately and do not directly address the issues with estimating many missing covariate values. Meanwhile, the truncated CJS model provides a more significant improvement in computational efficiency while inflating the posterior variance as a result of discarding some of the data. Both approaches are evaluated via simulation studies and a large mark-recapture data set consisting of cliff swallow weights and capture histories.
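A minimal sketch of the Gibbs imputation step for missing covariate values, using a Gaussian random-walk covariate model whose full conditionals are available in closed form: given both neighbors, a missing value is normal with mean the neighbor average and variance sigma^2/2. In the actual CJS setting the full conditionals would also involve the survival and capture submodels, so this is a simplified stand-in with illustrative data.

```python
import numpy as np

rng = np.random.default_rng(5)
T, sigma = 8, 1.0

# one individual's covariate trajectory under a random-walk model,
# observed only on capture occasions (illustrative)
x_true = np.cumsum(rng.normal(0, sigma, T))
captured = np.array([1, 0, 0, 1, 0, 1, 0, 1], dtype=bool)
x_obs = np.where(captured, x_true, np.nan)

def gibbs_impute(x_obs, n_iter=1000):
    x = x_obs.copy()
    miss = np.isnan(x)
    x[miss] = 0.0                      # crude initialization
    draws = []
    for _ in range(n_iter):
        for t in np.where(miss)[0]:
            if t == len(x) - 1:        # only the left neighbor constrains it
                mu, sd = x[t - 1], sigma
            else:                      # full conditional given both neighbors
                mu, sd = 0.5 * (x[t - 1] + x[t + 1]), sigma / np.sqrt(2)
            x[t] = rng.normal(mu, sd)
        draws.append(x.copy())
    return np.array(draws)

draws = gibbs_impute(x_obs)
print("posterior means at missing occasions:",
      draws[200:].mean(axis=0)[np.isnan(x_obs)])   # after burn-in
```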
