131

Contributions to Bayesian wavelet shrinkage

Remenyi, Norbert 07 November 2012
This thesis provides contributions to research in Bayesian modeling and shrinkage in the wavelet domain. Wavelets are a powerful tool for describing phenomena that change rapidly in time, and wavelet-based modeling has become a standard technique in many areas of statistics and, more broadly, in science and engineering. Bayesian modeling and estimation in the wavelet domain have found useful applications in nonparametric regression, image denoising, and many other areas. In this thesis, we build on the existing techniques and propose new methods for applications in nonparametric regression, image denoising, and partially linear models. The thesis consists of an overview chapter and four main topics.

In Chapter 1, we provide an overview of recent developments and the current status of Bayesian wavelet shrinkage research. The chapter contains an extensive literature review of almost 100 references. Its main focus is nonparametric regression, where the observations come from an unknown function contaminated with Gaussian noise. We present many methods which employ model-based and adaptive shrinkage of the wavelet coefficients through Bayes rules. These include new developments such as dependence models, complex wavelets, and Markov chain Monte Carlo (MCMC) strategies. Some applications of Bayesian wavelet shrinkage, such as curve classification, are discussed.

In Chapter 2, we propose the Gibbs Sampling Wavelet Smoother (GSWS), an adaptive wavelet denoising methodology. We use the traditional mixture prior on the wavelet coefficients, but formulate a fully Bayesian hierarchical model in the wavelet domain that accounts for the uncertainty of the prior parameters by placing hyperpriors on them. Since a closed-form solution for the Bayes estimator does not exist, the procedure is computational: the posterior mean is computed via MCMC simulation. We show how to develop an efficient Gibbs sampling algorithm for the proposed model. The developed procedure is fully Bayesian, adapts to the underlying signal, and provides good denoising performance compared to state-of-the-art methods. Application of the method is illustrated on a real data set arising from the analysis of metabolic pathways, where an iterative shrinkage procedure is developed to preserve the mass balance of the metabolites in the system. We also show how the methodology can be extended to complex wavelet bases.

In Chapter 3, we propose a wavelet-based denoising methodology based on a Bayesian hierarchical model using a double Weibull prior. The interesting feature is that, in contrast to the mixture priors traditionally used by some state-of-the-art methods, the wavelet coefficients are modeled by a single density. Two estimators are developed, one based on the posterior mean and the other on the larger posterior mode, and we show how to calculate both efficiently. The methodology provides good denoising performance, comparable even to state-of-the-art methods that use a mixture prior and an empirical Bayes setting of hyperparameters; this is demonstrated by simulations on standard test functions. An application to a real-world data set is also considered.

In Chapter 4, we propose a wavelet shrinkage method based on a neighborhood of wavelet coefficients, which includes two neighboring coefficients and a parental coefficient. The methodology is called Lambda-neighborhood wavelet shrinkage, motivated by the shape of the considered neighborhood. We propose a Bayesian hierarchical model using a contaminated exponential prior on the total mean energy in the Lambda-neighborhood. The hyperparameters in the model are estimated by the empirical Bayes method, and the posterior mean, median, and Bayes factor are obtained and used in the estimation of the total mean energy. Shrinkage of the neighboring coefficients is based on the ratio of the estimated and observed energy. The proposed methodology is comparable, and often superior, to several established wavelet denoising methods that utilize neighboring information, as demonstrated by extensive simulations. An application to a real-world data set from inductance plethysmography is considered, and an extension to image denoising is discussed.

In Chapter 5, we propose a wavelet-based methodology for estimation and variable selection in partially linear models. The inference is conducted in the wavelet domain, which provides a sparse and localized decomposition appropriate for nonparametric components with various degrees of smoothness. A hierarchical Bayes model is formulated on the parameters of this representation, and estimation and variable selection are performed by a Gibbs sampling procedure. For both the parametric and nonparametric parts of the model we use point-mass-at-zero contamination priors with a double exponential spread distribution; in this sense we extend the model of Chapter 2 to partially linear models. Only a few papers exist in the area of partially linear wavelet models, and we show that the proposed methodology is often superior to the existing methods at estimating model parameters. Moreover, the method can perform Bayesian variable selection by a stochastic search over the parametric part of the model.
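
For readers who want a concrete feel for the mixture-prior shrinkage that underlies Chapters 1 and 2, the sketch below applies the standard posterior-mean rule under a point-mass-plus-Gaussian prior to the detail coefficients of a noisy signal. This is a minimal illustration, not the GSWS itself: the mixing weight p, the slab scale tau, the db4 basis, and the MAD noise estimate are fixed illustrative assumptions, whereas the GSWS places hyperpriors on these quantities and integrates them out by Gibbs sampling.

```python
# A minimal sketch of posterior-mean shrinkage under the standard
# point-mass-plus-Gaussian mixture prior, applied level by level with
# PyWavelets. The fixed hyperparameters (p, tau), the db4 basis, and the
# MAD noise estimate are illustrative assumptions.
import numpy as np
import pywt
from scipy.stats import norm

def mixture_posterior_mean(d, sigma, tau, p):
    """E[theta | d] when theta ~ (1-p)*delta_0 + p*N(0, tau^2) and d = theta + N(0, sigma^2)."""
    m1 = p * norm.pdf(d, scale=np.sqrt(sigma**2 + tau**2))    # marginal under the signal slab
    m0 = (1 - p) * norm.pdf(d, scale=sigma)                   # marginal under the point mass
    post_nonzero = m1 / (m1 + m0)                             # P(theta != 0 | d)
    return post_nonzero * (tau**2 / (tau**2 + sigma**2)) * d  # adaptive shrinkage toward zero

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024)
signal = np.sin(8 * np.pi * t) * (t > 0.3)                    # a piecewise-smooth test function
noisy = signal + 0.2 * rng.standard_normal(t.size)

coeffs = pywt.wavedec(noisy, 'db4', level=5)
sigma_hat = np.median(np.abs(coeffs[-1])) / 0.6745            # MAD noise estimate, finest level
shrunk = [coeffs[0]] + [mixture_posterior_mean(c, sigma_hat, tau=1.0, p=0.2)
                        for c in coeffs[1:]]
denoised = pywt.waverec(shrunk, 'db4')
```

Under this model the posterior mean factors into the posterior probability that a coefficient is nonzero times a linear shrinkage of the observed coefficient, which is why the rule can be applied coefficient by coefficient.
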
132

USE OF APRIORI KNOWLEDGE ON DYNAMIC BAYESIAN MODELS IN TIME-COURSE EXPRESSION DATA PREDICTION

Kilaru, Gokhul Krishna 20 March 2012
Indiana University-Purdue University Indianapolis (IUPUI) / Bayesian networks, one of the most widely used techniques to understand or predict the future by making use of current or previous data, have gained credence over the last decade for their ability to simulate large gene expression datasets to track and predict the reasons for changes in biological systems. In this work, we present a dynamic Bayesian model with gene annotation scores such as the gene characterization index (GCI) and the GenCards inferred functionality score (GIFtS) to understand and assess the prediction performance of the model when prior knowledge is incorporated. Time-course breast cancer data, including expression data for genes in breast cell lines treated with doxorubicin, is considered for this study. Bayes Server software was used for the simulations in a dynamic Bayesian environment with 8 and 19 genes on 12 different data combinations for each category of gene set, to predict and understand future time-course expression profiles when annotation scores are incorporated into the model. The 8-gene set predicted the next time course with r > 0.95, and the 19-gene set yielded r > 0.8 in 92% of the simulation experiments. These results show that incorporating prior knowledge into a dynamic Bayesian model for simulating time-course expression data can improve prediction performance when sufficient a priori parameters are provided.
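
Bayes Server is a commercial package, so as a library-agnostic sketch of the idea, the following propagates a belief state one step through a linear-Gaussian dynamic Bayesian network, with hypothetical annotation scores used to tighten the prior on well-characterized genes. The transition matrix, noise levels, and the score-to-variance mapping are all illustrative assumptions, not the model from the thesis.

```python
# A library-agnostic sketch of one-step prediction in a linear-Gaussian
# dynamic Bayesian network. The transition matrix A, the noise levels, and
# the use of a (hypothetical) annotation score to tighten the prior on
# well-characterized genes are illustrative assumptions.
import numpy as np

def dbn_predict(mu, Sigma, A, Q):
    """Propagate the belief N(mu, Sigma) through x_{t+1} = A x_t + w, w ~ N(0, Q)."""
    return A @ mu, A @ Sigma @ A.T + Q

rng = np.random.default_rng(1)
n_genes = 8
A = 0.9 * np.eye(n_genes) + 0.01 * rng.standard_normal((n_genes, n_genes))
Q = 0.05 * np.eye(n_genes)                    # process noise

# Hypothetical annotation scores in (0, 1] (e.g. rescaled GCI or GIFtS):
# a higher score yields a tighter, more confident prior on that gene.
scores = rng.uniform(0.3, 1.0, n_genes)
Sigma0 = np.diag(1.0 / scores)

mu_t = rng.standard_normal(n_genes)           # current expression profile
mu_next, Sigma_next = dbn_predict(mu_t, Sigma0, A, Q)
```
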
133

Bayesian belief networks for dementia diagnosis and other applications : a comparison of hand-crafting and construction using a novel data driven technique

Oteniya, Lloyd January 2008
The Bayesian network (BN) formalism is a powerful representation for encoding domains characterised by uncertainty. However, before it can be used it must first be constructed, which is a major challenge for any real-life problem. There are two broad approaches: the hand-crafted approach, which relies on a human expert, and the data-driven approach, which relies on data. The former approach is useful; however, issues such as human bias can introduce errors into the model. We have conducted a literature review of the expert-driven approach, selected a number of common methods, and engineered a framework to assist non-BN experts with expert-driven construction of BNs. The latter approach uses algorithms to construct the model from a data set. However, construction from data is provably NP-hard. To address this, approximate heuristic algorithms have been proposed; in particular, algorithms that assume an order between the nodes, thereby reducing the search space. Traditionally, however, this approach relies on an expert providing the order among the variables; an expert may not always be available, or may be unable to provide the order. Nevertheless, if a good order is available, these order-based algorithms have demonstrated good performance. More recent approaches attempt to "learn" a good order and then use an order-based algorithm to discover the structure. To eliminate the need for order information during construction, we propose a search in the entire space of Bayesian network structures; we present a novel approach for carrying out this task, and we demonstrate its performance against existing algorithms that search the entire space and the space of orders. Finally, we employ the hand-crafting framework to construct models for a "real-life" medical diagnosis task, dementia diagnosis. We collect real dementia data from clinical practice and apply the data-driven algorithms developed here to assess the concordance between the reference models developed by hand and the models derived from real clinical data.
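
As a small illustration of the order-based construction the abstract describes, the sketch below runs a K2-style greedy parent search under a BIC score for binary variables, given a fixed node order. The order, the max_parents bound, and the toy data are illustrative assumptions; the thesis's contribution is precisely a search that does not require such an order.

```python
# A K2-style sketch: given a fixed node order, greedily add parents that
# improve a BIC score, for binary variables. The order, max_parents bound,
# and toy data are illustrative assumptions; the thesis's algorithm avoids
# the need for an order altogether.
import numpy as np
from itertools import product

def bic_family_score(data, child, parents):
    """BIC contribution of `child` given a parent set, all variables binary."""
    n = data.shape[0]
    ll = 0.0
    cols = data[:, parents]
    for config in product([0, 1], repeat=len(parents)):
        mask = (np.all(cols == np.array(config, dtype=int), axis=1)
                if parents else np.ones(n, dtype=bool))
        m, k = mask.sum(), data[mask, child].sum()
        if m == 0:
            continue
        p = (k + 1) / (m + 2)                         # Laplace-smoothed P(child=1 | config)
        ll += k * np.log(p) + (m - k) * np.log(1 - p)
    return ll - 0.5 * (2 ** len(parents)) * np.log(n)

def k2_search(data, order, max_parents=2):
    """For each node, greedily add the best-scoring parent from earlier in the order."""
    parents = {v: [] for v in order}
    for i, v in enumerate(order):
        while len(parents[v]) < max_parents:
            current = bic_family_score(data, v, parents[v])
            candidates = [u for u in order[:i] if u not in parents[v]]
            if not candidates:
                break
            best_score, best_u = max((bic_family_score(data, v, parents[v] + [u]), u)
                                     for u in candidates)
            if best_score <= current:
                break
            parents[v].append(best_u)
    return parents

rng = np.random.default_rng(2)
x0 = rng.integers(0, 2, 500)
x1 = rng.integers(0, 2, 500)
x2 = (x0 ^ (rng.random(500) < 0.1)).astype(int)       # x2 noisily copies x0
data = np.column_stack([x0, x1, x2])
print(k2_search(data, order=[0, 1, 2]))               # expect node 2 to pick parent 0
```
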
134

Bayesian Reinforcement Learning Methods for Network Intrusion Prevention

Nesti Lopes, Antonio Frederico January 2021
A growing problem in network security stems from the fact that both attack methods and target systems constantly evolve. This problem makes it difficult for human operators to keep up and manage the security problem. To deal with this challenge, a promising approach is to use reinforcement learning to adapt security policies to a changing environment. However, a drawback of this approach is that traditional reinforcement learning methods require a large amount of data in order to learn effective policies, which can be both costly and difficult to obtain. To address this problem, this thesis investigates ways to incorporate prior knowledge in learning systems for network security. Our goal is to be able to learn security policies with less data compared to traditional reinforcement learning algorithms. To investigate this question, we take a Bayesian approach and consider Bayesian reinforcement learning methods as a complement to current algorithms in reinforcement learning. Specifically, in this work, we study the following algorithms: Bayesian Q-learning, Bayesian REINFORCE, and Bayesian Actor-Critic. To evaluate our approach, we have implemented the mentioned algorithms and techniques and applied them to different simulation scenarios of intrusion prevention. Our results demonstrate that the Bayesian reinforcement learning algorithms are able to learn more efficiently compared to their non-Bayesian counterparts but that the Bayesian approach is more computationally demanding. Further, we find that the choice of prior and the kernel function have a large impact on the performance of the algorithms.
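
As a minimal sketch of one of the studied algorithms, the following implements a simplified Bayesian Q-learning agent that keeps an independent Normal posterior over each Q-value and selects actions by Thompson sampling. Treating the bootstrapped target as a noisy observation with known variance is a simplifying assumption (the classical formulation of Dearden et al. uses Normal-Gamma posteriors), and the toy environment is invented for illustration.

```python
# A simplified Bayesian Q-learning sketch: an independent Normal posterior
# per (state, action) pair, Thompson sampling for exploration, and a
# conjugate Normal update that treats the bootstrapped target as a noisy
# observation with known variance. The toy environment is invented.
import numpy as np

class BayesianQLearner:
    def __init__(self, n_states, n_actions, gamma=0.9, obs_var=1.0):
        self.mu = np.zeros((n_states, n_actions))        # posterior means of Q
        self.var = np.full((n_states, n_actions), 10.0)  # posterior variances (wide prior)
        self.gamma, self.obs_var = gamma, obs_var

    def act(self, s, rng):
        # Thompson sampling: draw one Q-value per action, act greedily on the draw.
        return int(np.argmax(rng.normal(self.mu[s], np.sqrt(self.var[s]))))

    def update(self, s, a, r, s_next):
        # Conjugate Normal update with the TD target as the observation.
        target = r + self.gamma * self.mu[s_next].max()
        precision = 1.0 / self.var[s, a] + 1.0 / self.obs_var
        self.mu[s, a] = (self.mu[s, a] / self.var[s, a] + target / self.obs_var) / precision
        self.var[s, a] = 1.0 / precision

rng = np.random.default_rng(3)
agent = BayesianQLearner(n_states=2, n_actions=2)
s = 0
for _ in range(200):
    a = agent.act(s, rng)
    r = 1.0 if (s, a) == (1, 1) else 0.0   # toy reward: only (state 1, action 1) pays
    s_next = a                             # the action deterministically sets the next state
    agent.update(s, a, r, s_next)
    s = s_next
```

Because the posterior variance shrinks only for frequently visited pairs, Thompson sampling keeps exploring exactly where the agent is still uncertain, which is the data-efficiency argument the thesis evaluates.
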
135

Evaluation of one classical and two Bayesian estimators of system availability using multiple attribute decision making techniques

McCahon, Cynthia S January 2011
Typescript (photocopy). / Digitized by Kansas Correctional Industries
136

Alzheimer's disease heterogeneity assessment using high dimensional clustering techniques

Poulakis, Konstantinos January 2016
This thesis sets out to investigate the heterogeneity of Alzheimer's disease (AD) in an unsupervised framework. Different subtypes of AD have been identified in a number of past studies. The major objective of the thesis is to apply clustering methods specialized for high-dimensional data sets to a sample of AD patients. The evaluation of these clustering methods, and the interpretation of the resulting groups from a statistical and a medical point of view, are additional objectives. The data consist of 271 MRI images of AD patients from the AddNeuroMed and ADNI cohorts. The raw MRIs have been preprocessed with the FreeSurfer software, and 82 cortical and subcortical volumes have been extracted for the analysis. The effect of different initialization strategies for a modified Gaussian mixture model (GMM) (Bouveyron et al., 2007) has been studied. Additionally, the GMM and a Bayesian clustering method proposed by Nia (2009) have been compared with respect to their performance on various distance-based evaluation criteria. The latter method resulted in the most compact and isolated clusters. The optimal number of clusters was evaluated with the Hopkins statistic; 6 clusters were selected, while 2 observations formed an outlier cluster. Different patterns of atrophy were discovered in the 6 clusters. One cluster presented atrophy in the medial temporal area only (n=37, ~13.65%). Another presented atrophy in the lateral and medial temporal lobe and parts of the parietal lobe (n=39, ~14.4%). A third presented atrophy in temporoparietal areas but also in the frontal lobe (n=74, ~27.3%). The remaining three clusters presented diffuse atrophy in nearly all the association cortices, with some variation in the patterns (n=40, ~14.7%; n=58, ~21.4%; n=21, ~7.7%). The 6 subtypes also differed in their demographic, clinical, and pathological features.
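
As a rough sketch of the model-based clustering step, the following fits Gaussian mixtures over a range of cluster counts and picks the best by BIC with scikit-learn. The random stand-in data, the diagonal covariance structure, and BIC-based selection are illustrative assumptions; the thesis uses the high-dimensional GMM of Bouveyron et al. (2007), a Bayesian alternative, and the Hopkins statistic on real FreeSurfer volumes.

```python
# A rough sketch of the clustering step with scikit-learn: fit Gaussian
# mixtures for a range of cluster counts and select by BIC. The random
# stand-in data and diagonal covariances are illustrative assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
X = rng.standard_normal((271, 82))     # stand-in for the 271 x 82 volume matrix
X = StandardScaler().fit_transform(X)  # put the regional volumes on a common scale

fits = {k: GaussianMixture(n_components=k, covariance_type='diag',
                           n_init=5, random_state=0).fit(X)
        for k in range(2, 9)}
best_k = min(fits, key=lambda k: fits[k].bic(X))
labels = fits[best_k].predict(X)       # cluster assignments under the best model
```

The n_init argument refits from several random starts, which is one simple stand-in for the initialization strategies the thesis compares.
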
137

BAYESIAN DECISION ANALYSIS OF A STATISTICAL RAINFALL/RUNOFF RELATION

Gray, Howard Axtell 10 1900
The first purpose of this thesis is to provide a framework for including data from a secondary source in Bayesian decision analysis as an aid to decision making under uncertainty. A second purpose is to show that the Bayesian procedures can be implemented on a computer to obtain accurate results at little expense in computing time. The state variables of a bridge design example problem are the unknown parameters of the probability distribution of the primary data. The primary source is the annual peak flow data for the stream being spanned. Information pertinent to the choice of bridge design is contained in rainfall data from gauges on the watershed, but the distribution of this secondary data cannot be directly expressed in terms of the state variables. This study shows that a linear regression equation relating the primary and secondary data provides a means of using secondary data to find the Bayes risk and expected opportunity loss associated with any particular bridge design and a single new rainfall observation. The numerical results for the example problem indicate that the information gained from the rainfall data reduces the Bayes risk and expected opportunity loss and allows a more economical structural design. Furthermore, careful choice of the numerical methods employed reduces the computation time for these quantities to a level acceptable to any budget.
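
The decision-theoretic core of the approach can be sketched in a few lines: a fitted rainfall/runoff regression turns a single new rainfall observation into a predictive distribution for peak flow, and each candidate design is scored by its expected loss. The regression coefficients, the loss function, and the design grid below are illustrative assumptions, not the thesis's numbers.

```python
# A minimal sketch of the decision analysis: a fitted rainfall/runoff
# regression converts one new rainfall observation into predictive draws of
# peak flow, and candidate designs are compared by expected loss. The
# coefficients, losses, and design grid are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(5)

b0, b1, sigma = 2.0, 1.5, 0.8          # assumed fitted regression: flow = b0 + b1*rain + e
new_rain = 3.2                         # a single new rainfall observation

# Monte Carlo draws from the predictive distribution of peak flow.
flows = b0 + b1 * new_rain + sigma * rng.standard_normal(100_000)

def expected_loss(capacity, flows, build_cost=1.0, failure_cost=50.0):
    """Construction cost grows with capacity; exceedance incurs a failure cost."""
    return build_cost * capacity + failure_cost * np.mean(flows > capacity)

designs = np.linspace(4.0, 12.0, 33)
best = min(designs, key=lambda c: expected_loss(c, flows))
```
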
138

Statistical analysis in downscaling climate models : wavelet and Bayesian methods in multimodel ensembles

Cai, Yihua August 2009
Various climate models have been developed to analyze and predict climate change; however, model uncertainties cannot be easily overcome. A statistical approach is presented to calculate the distributions of future climate change based on an ensemble of Weather Research and Forecasting (WRF) model configurations. Wavelet analysis is adopted to de-noise the WRF model output. Using the de-noised output, we carry out a Bayesian analysis to decrease the uncertainties in the CAM_KF, RRTM_KF, and RRTM_GRELL models for each downscaling region.
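
One simple way to combine ensemble members in a Bayesian fashion, sketched below, is to weight each member by its Gaussian likelihood on a training window, which yields posterior model probabilities under a uniform prior. The toy observations, the member predictions, and the noise scale are illustrative assumptions; the thesis's treatment of the WRF ensemble is more involved.

```python
# A sketch of Bayesian model averaging over ensemble members: weight each
# member by its Gaussian likelihood on a training window (uniform prior over
# members). The toy observations, predictions, and noise scale are
# illustrative assumptions.
import numpy as np
from scipy.stats import norm

obs = np.array([1.0, 1.2, 0.9, 1.1])                   # training-window observations
preds = {'CAM_KF':     np.array([1.1, 1.3, 1.0, 1.2]),
         'RRTM_KF':    np.array([0.8, 1.0, 0.7, 0.9]),
         'RRTM_GRELL': np.array([1.0, 1.1, 0.9, 1.0])}

sigma = 0.2
logL = {m: norm.logpdf(obs, loc=p, scale=sigma).sum() for m, p in preds.items()}
mx = max(logL.values())                                # subtract the max for stability
unnorm = {m: np.exp(v - mx) for m, v in logL.items()}
total = sum(unnorm.values())
weights = {m: v / total for m, v in unnorm.items()}    # posterior model probabilities

bma_forecast = sum(w * preds[m] for m, w in weights.items())  # weighted combination
```
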
139

Bayesian passive sonar tracking in the context of active-passive data fusion

Yocom, Bryan Alan August 2009
This thesis investigates the improvements that can be made to Bayesian passive sonar tracking in the context of active-passive sonar data fusion. Performance improvements are achieved by exploiting the prior information available within a typical Bayesian data fusion framework. The algorithms developed are tested against both simulated data and data measured during the SEABAR 07 sea trial. Results show that the proposed approaches achieve improved detection, decreased estimation error, and the ability to track quiet targets in the presence of loud interferers.
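
A common computational workhorse for Bayesian passive tracking is the bootstrap particle filter; the sketch below runs one measurement update of a bearings-only filter in 2-D. The near-constant-velocity motion model, the noise levels, and the sensor geometry are illustrative assumptions rather than the configuration used against the SEABAR 07 data.

```python
# A bootstrap particle filter sketch for bearings-only tracking in 2-D:
# propagate a near-constant-velocity model, reweight by the bearing
# likelihood, resample. Motion model, noise levels, sensor position, and
# the simulated target are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(6)
N = 5000
particles = rng.normal([0, 0, 1, 0], [5, 5, 0.5, 0.5], size=(N, 4))  # [x, y, vx, vy]
sensor = np.array([0.0, -10.0])
dt, sigma_bearing = 1.0, 0.05

def pf_step(particles, measured_bearing):
    # Propagate with process noise on the velocity components.
    particles[:, :2] += dt * particles[:, 2:]
    particles[:, 2:] += 0.05 * rng.standard_normal((len(particles), 2))
    # Reweight by the bearing likelihood, wrapping the error to [-pi, pi).
    pred = np.arctan2(particles[:, 1] - sensor[1], particles[:, 0] - sensor[0])
    err = (measured_bearing - pred + np.pi) % (2 * np.pi) - np.pi
    w = np.exp(-0.5 * (err / sigma_bearing) ** 2)
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)  # multinomial resampling
    return particles[idx]

true_target = np.array([3.0, 5.0])
bearing = np.arctan2(true_target[1] - sensor[1], true_target[0] - sensor[0])
particles = pf_step(particles, bearing + sigma_bearing * rng.standard_normal())
estimate = particles[:, :2].mean(axis=0)      # posterior mean position estimate
```

In an active-passive fusion setting, the prior information the abstract mentions would enter through the initial particle cloud, which here is just a broad Gaussian.
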
140

Mixtures of triangular densities with applications to Bayesian mode regressions

Ho, Chi-San 22 September 2014
The main focus of this thesis is to develop full parametric and semiparametric Bayesian inference for data arising from triangular distributions. A natural consequence of working with such distributions is that they allow one to consider regression models where the response variable is the mode of the data distribution. A new family of nonparametric prior distributions is developed for a certain class of convex densities of particular relevance to mode regressions. Triangular distributions arise in several contexts, such as geosciences, econometrics, finance, health care management, sociology, reliability engineering, and decision and risk analysis. In many fields, experts typically have a reasonable idea of the range and the most likely values that define a data distribution; eliciting these quantities is thus generally easier than eliciting the moments of other commonly used distributions. Using simulated and actual data, applications of triangular distributions, with and without mode regressions, in some of the aforementioned areas are tackled.
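
As a minimal fully parametric illustration, the sketch below computes a grid posterior for the mode of a triangular distribution on a known interval under a uniform prior, which is the simplest version of the mode-centered inference the abstract describes. The interval, sample size, and grid are illustrative assumptions; the thesis develops far richer mixture and semiparametric machinery.

```python
# A minimal sketch of Bayesian inference for the mode c of a
# Triangular(a, c, b) distribution with known endpoints, via a grid
# posterior under a uniform prior on c. Data and interval are illustrative.
import numpy as np

def triangular_logpdf(x, a, b, c):
    """Log density of Triangular(a, c, b) evaluated at points x in [a, b]."""
    dens = np.where(x <= c,
                    2 * (x - a) / ((b - a) * (c - a)),
                    2 * (b - x) / ((b - a) * (b - c)))
    return np.log(np.clip(dens, 1e-300, None))     # guard against log(0) at the endpoints

rng = np.random.default_rng(7)
a, b, true_mode = 0.0, 10.0, 3.0
data = rng.triangular(a, true_mode, b, size=200)

grid = np.linspace(a + 0.05, b - 0.05, 400)        # candidate modes, away from the endpoints
log_post = np.array([triangular_logpdf(data, a, b, c).sum() for c in grid])
post = np.exp(log_post - log_post.max())
post /= post.sum()
posterior_mean_mode = (grid * post).sum()          # point estimate of the mode
```
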
