11

Bayesian Semiparametric Models for Heterogeneous Cross-platform Differential Gene Expression

Dhavala, Soma Sekhar December 2010 (has links)
We are concerned with testing for differential expression and consider three different aspects of such testing procedures. First, we develop an exact ANOVA-type model for discrete gene expression data produced by technologies such as Massively Parallel Signature Sequencing (MPSS), Serial Analysis of Gene Expression (SAGE), or other next-generation sequencing technologies. We adopt two Bayesian hierarchical models: one parametric and the other semiparametric with a Dirichlet process prior that can borrow strength across related signatures, where a signature is a specific arrangement of nucleotides. We exploit the discreteness of the Dirichlet process prior to cluster signatures that exhibit similar differential expression profiles. Tests for differential expression are carried out using non-parametric approaches while controlling the false discovery rate. Next, we consider ways to combine expression data from different studies, possibly produced by different technologies and therefore yielding mixed-type responses, such as microarrays and MPSS. Depending on the technology, the expression data can be continuous or discrete and can have different technology-dependent noise characteristics. Adding to the difficulty, genes can have an arbitrary correlation structure both within and across studies, and performing many hypothesis tests for differential expression can itself lead to false discoveries. We address all of these challenges using a hierarchical Dirichlet process with a spike-and-slab base prior on the random effects, while smoothing splines model the unknown link functions that map the different technology-dependent manifestations to latent processes on which inference is based. Finally, we propose an algorithm for controlling different error measures in Bayesian multiple testing under generic loss functions, including the widely used uniform loss function. We do not make any specific assumptions about the underlying probability model, but we require that indicator variables for the individual hypotheses be available as a component of the inference. Given this information, we recast multiple hypothesis testing as a combinatorial optimization problem, in particular the 0-1 knapsack problem, which can be solved efficiently by a variety of algorithms, both approximate and exact.
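To make the final contribution concrete, the sketch below shows one generic way (not necessarily the dissertation's formulation) to cast Bayesian multiple testing as a 0-1 knapsack problem: each hypothesis carries a posterior probability of being non-null, rejecting it contributes that probability to the expected true positives and its complement to the expected false discoveries, and a standard dynamic-programming knapsack solver picks the rejection set that maximises the former under a budget on the latter. The inputs and the discretisation scale are hypothetical.

def select_rejections(post_probs, fd_budget, scale=1000):
    """Choose hypotheses to reject via 0-1 knapsack dynamic programming (illustrative sketch)."""
    n = len(post_probs)
    # Rejecting hypothesis i "costs" its posterior null probability (expected false positives)
    # and "earns" its posterior non-null probability (expected true positives).
    weights = [round((1.0 - p) * scale) for p in post_probs]
    values = post_probs
    capacity = int(fd_budget * scale)

    dp = [0.0] * (capacity + 1)                      # dp[c]: best total value within cost c
    keep = [[False] * (capacity + 1) for _ in range(n)]
    for i in range(n):
        for c in range(capacity, weights[i] - 1, -1):
            candidate = dp[c - weights[i]] + values[i]
            if candidate > dp[c]:
                dp[c] = candidate
                keep[i][c] = True

    chosen, c = [], capacity                         # trace back the chosen rejection set
    for i in range(n - 1, -1, -1):
        if keep[i][c]:
            chosen.append(i)
            c -= weights[i]
    return sorted(chosen)

# Hypothetical posterior non-null probabilities; allow about 0.5 expected false discoveries.
print(select_rejections([0.99, 0.95, 0.80, 0.40, 0.10, 0.97], fd_budget=0.5))

The printout lists which hypotheses the solver rejects while keeping the discretised expected false discoveries within the stated budget.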
12

Bayesian Models for Multilingual Word Alignment

Östling, Robert January 2015 (has links)
In this thesis I explore Bayesian models for word alignment, how they can be improved through joint annotation transfer, and how they can be extended to parallel texts in more than two languages. In addition to these general methodological developments, I apply the algorithms to problems from sign language research and linguistic typology. In the first part of the thesis, I show how Bayesian alignment models estimated with Gibbs sampling are more accurate than previous methods for a range of different languages, particularly for languages with few digital resources available—which is unfortunately the state of the vast majority of languages today. Furthermore, I explore how different variations to the models and learning algorithms affect alignment accuracy. Then, I show how part-of-speech annotation transfer can be performed jointly with word alignment to improve word alignment accuracy. I apply these models to help annotate the Swedish Sign Language Corpus (SSLC) with part-of-speech tags, and to investigate patterns of polysemy across the languages of the world. Finally, I present a model for multilingual word alignment which learns an intermediate representation of the text. This model is then used with a massively parallel corpus containing translations of the New Testament, to explore word order features in 1001 languages.
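For readers unfamiliar with the model family, the following toy sketch shows a Bayesian lexical alignment model in the spirit of IBM Model 1, assuming a symmetric Dirichlet prior on the translation distributions, a uniform alignment prior, and collapsed Gibbs sampling over alignment links. It is a deliberately minimal illustration rather than the thesis's models, which additionally handle NULL alignment, distortion, annotation transfer, and the multilingual setting; the two-sentence corpus is hypothetical.

import random
from collections import defaultdict

def gibbs_align(bitext, iters=50, alpha=0.01, seed=0):
    """bitext: list of (source_tokens, target_tokens); returns links[s][j] = aligned source index."""
    rng = random.Random(seed)
    V = len({w for _, tgt in bitext for w in tgt})   # target vocabulary size

    # Random initial alignments and the translation counts n(e, f) and n(e, .)
    links = [[rng.randrange(len(src)) for _ in tgt] for src, tgt in bitext]
    pair_counts, src_counts = defaultdict(int), defaultdict(int)
    for (src, tgt), a in zip(bitext, links):
        for j, f in enumerate(tgt):
            pair_counts[(src[a[j]], f)] += 1
            src_counts[src[a[j]]] += 1

    for _ in range(iters):
        for (src, tgt), a in zip(bitext, links):
            for j, f in enumerate(tgt):
                # Remove the current link, then resample it from its collapsed conditional:
                # P(a_j = i | rest) is proportional to (n(e_i, f) + alpha) / (n(e_i, .) + alpha * V)
                pair_counts[(src[a[j]], f)] -= 1
                src_counts[src[a[j]]] -= 1
                weights = [(pair_counts[(e, f)] + alpha) / (src_counts[e] + alpha * V) for e in src]
                a[j] = rng.choices(range(len(src)), weights=weights)[0]
                pair_counts[(src[a[j]], f)] += 1
                src_counts[src[a[j]]] += 1
    return links

# Tiny hypothetical Swedish-English corpus.
bitext = [(["det", "lilla", "huset"], ["the", "little", "house"]),
          (["huset", "är", "litet"], ["the", "house", "is", "little"])]
print(gibbs_align(bitext))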
13

Evidence-Based Hospitals

Bardach, David R 01 January 2015 (has links)
In 2011 the University of Kentucky opened the first two inpatient floors of its new hospital. With an estimated cost of over $872 million, the new facility represents a major investment in the future of healthcare in Kentucky. This facility is outfitted with many features that were not present in the old hospital, with the expectation that they would improve the quality and efficiency of patient care. After one year of occupancy, hospital administration questioned the effectiveness of some features. Through focus groups of key stakeholders, surveys of frontline staff, and direct observational data, this dissertation evaluates the effectiveness of two such features, namely the ceiling-based patient lifts and the placement of large team meeting spaces on every unit, while also describing methods that can improve the overall state of quality improvement research in healthcare.
14

Computational, experimental, and statistical analyses of social learning in humans and animals

Whalen, Andrew January 2016 (has links)
Social learning is ubiquitous among animals and humans and is thought to be critical to the widespread success of humans and to the development and evolution of human culture. Evolutionary theory, however, suggests that social learning alone may not be adaptive, and that individuals may need to be selective about whom they copy and how. One of the key findings of these evolutionary models (reviewed in Chapter 1) is that social information may be widely adaptive if individuals are able to combine social and asocial sources of information strategically. However, until now the focus of theoretical models has been on the population-level consequences of different social learning strategies, not on how individuals combine social and asocial information on specific tasks. In Chapter 2 I analyse how animal learners might incorporate social information into a reinforcement learning framework and find that even limited, low-fidelity copying of actions in an action sequence may combine with asocial learning to result in high-fidelity transmission of entire action sequences. In Chapter 3 I describe a series of experiments showing that human learners flexibly use a conformity-biased learning strategy to learn from multiple demonstrators depending on demonstrator accuracy, indicated either by environmental cues or by past experience with those demonstrators. The chapter reveals close quantitative and qualitative matches between participants' performance and a Bayesian model of social learning. In both Chapters 2 and 3 I find, consistent with previous evolutionary findings, that by combining social and asocial sources of information individuals are able to learn about the world effectively. Exploring how animals use social learning experimentally can be substantially more difficult than exploring human social learning. In Chapter 4, I develop and present a refined version of Network-Based Diffusion Analysis to provide a statistical framework for inferring social learning mechanisms from animal diffusion experiments. In Chapter 5 I move from examining the effects of social learning at the individual level to examining their population-level outcomes, and provide an analysis of how fine-grained population structure may alter the spread of novel behaviours through a population. I find that although a learner's social learning strategy and the learnability of a novel behaviour strongly affect how likely the behaviour is to spread through the population, fine-grained population structure plays a much smaller role. In Chapter 6 I summarize the results of this thesis and provide suggestions for future work on understanding how individuals, humans and other animals alike, use social information.
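As a purely illustrative sketch (not the models developed in the thesis), the code below shows one generic way an individual learner could blend asocial reinforcement learning with socially observed choice frequencies through a conformity-weighted softmax; all parameter names and values are hypothetical.

import math
import random

def choose(q_values, demo_counts, social_weight=0.5, conformity=2.0, temperature=0.3, rng=random):
    """Blend asocial Q-values with a conformity-biased social term and sample an option."""
    total_demos = sum(demo_counts)
    scores = []
    for q, n in zip(q_values, demo_counts):
        # Conformity bias: demonstrator frequencies raised to an exponent > 1, so
        # majority options become disproportionately attractive.
        if total_demos:
            social = (n ** conformity) / sum(m ** conformity for m in demo_counts)
        else:
            social = 1.0 / len(q_values)
        scores.append((1 - social_weight) * q + social_weight * social)
    exps = [math.exp(s / temperature) for s in scores]        # softmax over the blended scores
    return rng.choices(range(len(q_values)), weights=exps)[0]

def update(q_values, action, reward, lr=0.1):
    """Standard asocial reinforcement learning update for the chosen option."""
    q_values[action] += lr * (reward - q_values[action])

# Example: the learner's own estimates slightly favour option 0,
# but five of six demonstrators chose option 1.
q = [0.2, 0.1]
print(choose(q, demo_counts=[1, 5]))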
15

Whitney Element Based Priors for Hierarchical Bayesian Models

Israeli, Yeshayahu D. 21 June 2021 (has links)
No description available.
16

Changing Criteria: What Decision Processes Reveal about Confidence in Memory

Castillo, Johanny N 28 October 2022 (has links)
Source memory is our ability to relate central information (the "item") to the context (the "source") in which it was learned or experienced. People are often highly confident in their source judgements even when this information is incorrectly recalled. Past work has aimed to explain why source errors made with high confidence occur using a framework called the Converging Criteria (CC) account. The CC account posits that item memory can interact with source memory by altering decision criteria as item confidence increases, which increases the probability of a high-confidence source judgement. This prediction differs from alternative models, such as the Fixed Criteria (FC) account, in which decision criteria are not expected to change with item confidence. The current study not only tests the implications of the CC account but also contrasts it with the predictions of the FC account with respect to item memory, item confidence, and source discriminability, using existing data from 12 recognition memory experiments. We use a Bayesian hierarchical model to estimate a key metric called the Item Confidence Effect (ICE): the change in the proportion of source errors made with high confidence as item confidence increases. Results show a positive ICE, demonstrating that the proportion of source errors made with high confidence increases with item confidence, as predicted by the CC account. In the context of memory, this evidence shows that decision processes can influence behavior regardless of whether the evidence in memory supports it.
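The ICE itself is simple to state; the following sketch computes a purely descriptive version of it from hypothetical trial data (this is not the study's Bayesian hierarchical model, which pools the estimate across participants and experiments): among source errors, the proportion made with high source confidence is compared between high and low item-confidence trials.

def item_confidence_effect(trials, high_item=5, high_source=5):
    """trials: iterable of dicts with 'item_conf', 'source_conf', 'source_correct' fields."""
    def high_conf_error_rate(subset):
        errors = [t for t in subset if not t["source_correct"]]
        if not errors:
            return 0.0
        return sum(t["source_conf"] >= high_source for t in errors) / len(errors)

    low_ic = [t for t in trials if t["item_conf"] < high_item]
    high_ic = [t for t in trials if t["item_conf"] >= high_item]
    # A positive value: high-confidence source errors are more common when item confidence is high.
    return high_conf_error_rate(high_ic) - high_conf_error_rate(low_ic)

# Hypothetical trials on 6-point confidence scales.
trials = [
    {"item_conf": 6, "source_conf": 6, "source_correct": False},
    {"item_conf": 6, "source_conf": 5, "source_correct": False},
    {"item_conf": 2, "source_conf": 2, "source_correct": False},
    {"item_conf": 3, "source_conf": 6, "source_correct": False},
    {"item_conf": 5, "source_conf": 3, "source_correct": True},
]
print(item_confidence_effect(trials))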
17

Adjusting for Bounding and Time-in-Sample Effects in the National Crime Victimization Survey (NCVS) Property Crime Rate Estimation

Yang, Hui 08 June 2016 (has links)
No description available.
18

Sequential sampling models of the flanker task: Model comparison and parameter validation

White, Corey N. 03 August 2010 (has links)
No description available.
19

Improved Bayesian methods for detecting recombination and rate heterogeneity in DNA sequence alignments

Mantzaris, Alexander Vassilios January 2011 (has links)
DNA sequence alignments are usually not homogeneous. Mosaic structures may result as a consequence of recombination or rate heterogeneity. Interspecific recombination, in which DNA subsequences are transferred between different (typically viral or bacterial) strains, may result in a change of the topology of the underlying phylogenetic tree. Rate heterogeneity corresponds to a change of the nucleotide substitution rate. Various methods for simultaneously detecting recombination and rate heterogeneity in DNA sequence alignments have recently been proposed, based on complex probabilistic models that combine phylogenetic trees with factorial hidden Markov models or multiple changepoint processes. The objective of my thesis is to identify potential shortcomings of these models and explore ways to improve them. One shortcoming that I have identified is related to an approximation made in various recently proposed Bayesian models. The Bayesian paradigm requires the solution of an integral over the space of parameters. To render this integration analytically tractable, these models assume that the vectors of branch lengths of the phylogenetic tree are independent among sites. While this approximation reduces the computational complexity considerably, I show that it leads to the systematic prediction of spurious topology changes in the Felsenstein zone, that is, the region of branch-length configuration space where maximum parsimony consistently infers the wrong topology due to long-branch attraction. I demonstrate these failures by using two Bayesian hypothesis tests, based on an inter- and an intra-model approach to estimating the marginal likelihood. I then propose a revised model that addresses these shortcomings, and demonstrate its improved performance on a set of synthetic DNA sequence alignments systematically generated around the Felsenstein zone. The core model explored in my thesis is a phylogenetic factorial hidden Markov model (FHMM) for detecting two types of mosaic structures in DNA sequence alignments, related to recombination and rate heterogeneity. The focus of my work is on improving the modelling of the latter aspect. Earlier research efforts by other authors have modelled different degrees of rate heterogeneity with separate hidden states of the FHMM. Their work fails to appreciate the intrinsic difference between two types of rate heterogeneity: long-range regional effects, which are potentially related to differences in the selective pressure, and the short-term periodic patterns within the codons, which merely capture the signature of the genetic code. I have improved these earlier phylogenetic FHMMs in two respects. Firstly, by sampling the rate vector from the posterior distribution with RJMCMC, I have made the modelling of regional rate heterogeneity more flexible, and I infer the number of different degrees of divergence directly from the DNA sequence alignment, thereby dispensing with the need to arbitrarily select this quantity in advance. Secondly, I explicitly model within-codon rate heterogeneity via a separate rate modification vector. In this way, the within-codon effect of rate heterogeneity is imposed on the model a priori, which facilitates the learning of the biologically more interesting effect of regional rate heterogeneity a posteriori. I have carried out simulations on synthetic DNA sequence alignments, which have borne out my conjecture.
The existing model, which does not explicitly include the within-codon rate variation, has to model both effects with the same modelling mechanism. As expected, it was found to fail to disentangle these two effects. In contrast, I have found that my new model clearly separates within-codon rate variation from regional rate heterogeneity, resulting in more accurate predictions.
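The regional component of the problem can be illustrated at a much simpler level. The sketch below is not the thesis's phylogenetic FHMM; it runs a standard forward-backward recursion over a plain two-state HMM whose hidden states are rate categories and whose emissions are per-site likelihoods, supplied here as hypothetical numbers rather than computed from a phylogenetic tree, returning posterior rate-state probabilities along the alignment.

import numpy as np

def posterior_rate_states(site_liks, trans, init):
    """site_liks: (n_sites, n_states) per-site likelihoods under each hidden rate state."""
    n, k = site_liks.shape
    fwd = np.zeros((n, k))
    bwd = np.zeros((n, k))
    fwd[0] = init * site_liks[0]
    fwd[0] /= fwd[0].sum()                              # scale each step to avoid underflow
    for t in range(1, n):
        fwd[t] = (fwd[t - 1] @ trans) * site_liks[t]
        fwd[t] /= fwd[t].sum()
    bwd[-1] = 1.0
    for t in range(n - 2, -1, -1):
        bwd[t] = trans @ (site_liks[t + 1] * bwd[t + 1])
        bwd[t] /= bwd[t].sum()
    post = fwd * bwd                                     # forward-backward smoothing
    return post / post.sum(axis=1, keepdims=True)

# Two hypothetical rate states ("slow", "fast") with sticky transitions along the alignment.
trans = np.array([[0.95, 0.05], [0.05, 0.95]])
init = np.array([0.5, 0.5])
site_liks = np.array([[0.9, 0.1], [0.8, 0.2], [0.2, 0.8], [0.1, 0.9]])
print(posterior_rate_states(site_liks, trans, init))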
20

Determinants of Accession to Patent Treaties, 1970-2000: The Paris Convention and the Patent Cooperation Treaty

Pereira Neto, Manoel Galdino 30 September 2011 (has links)
In this work we investigate the determinants of countries' accession to two international patent treaties: the Paris Convention and the Patent Cooperation Treaty (PCT). Using a Bayesian hierarchical model, we present evidence that domestic factors are important in predicting accession to the treaties studied. However, which factors matter depends on the type of treaty. For the PCT, a treaty aimed at reducing transaction costs, domestic patent legislation is not a relevant factor. For the Paris Convention, which limits policy options in the area of patents, domestic legislation is a relevant factor. We also show that the direct gains from participating in the treaties, measured by the number of patents held abroad, are an important variable positively associated with the probability of accession to both agreements. We further present evidence that systemic variables matter and that changes in the international system over the past 30 years are important factors in explaining accession to the treaties.
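To illustrate the structure of such a model, the sketch below simulates data from a hypothetical Bayesian hierarchical logistic specification with country-level random intercepts and coefficients for domestic patent legislation and patenting abroad. It is a generative toy, not the dissertation's model; in practice the parameters would be inferred with MCMC rather than fixed as here.

import numpy as np

rng = np.random.default_rng(0)
n_countries, n_years = 40, 30

# Country-level random intercepts drawn from a shared (hierarchical) prior.
alpha = rng.normal(loc=-3.0, scale=1.0, size=n_countries)
beta_law, beta_patents = 0.8, 1.2                              # hypothetical effect sizes

domestic_law = rng.binomial(1, 0.4, size=n_countries)          # has patent legislation?
patents_abroad = rng.normal(size=(n_countries, n_years))       # standardised patenting abroad

logit = alpha[:, None] + beta_law * domestic_law[:, None] + beta_patents * patents_abroad
prob_accede = 1.0 / (1.0 + np.exp(-logit))
accession = rng.binomial(1, prob_accede)                       # simulated country-year accession indicators

print(prob_accede.mean(), accession.mean())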
