61

Exploring Model Fit and Methods for Measurement Invariance Concerning One Continuous or More Different Violators under Latent Variable Modeling

Liu, Yuanfang January 2022 (has links)
No description available.
62

High-performance and Scalable Bayesian Group Testing and Real-time fMRI Data Analysis

Chen, Weicong 27 January 2023 (has links)
No description available.
63

Correlation of Bivariate Frailty Models and a New Marginal Weibull Distribution for Correlated Bivariate Survival Data

Lin, Min 19 September 2011 (has links)
No description available.
64

Models for heterogeneous variable selection

Gilbride, Timothy J. 19 May 2004 (has links)
No description available.
65

Semiparametric Regression Methods with Covariate Measurement Error

Johnson, Nels Gordon 06 December 2012 (has links)
In public health, biomedical, epidemiological, and other applications, data collected are often measured with error. When mismeasured data are used in a regression analysis, not accounting for the measurement error can lead to incorrect inference about the relationships between the covariates and the response. We investigate measurement error in the covariates of two types of regression models. For each we propose a fully Bayesian approach that treats the variable measured with error as a latent variable to be integrated over, and a semi-Bayesian approach which uses a first-order Laplace approximation to marginalize the variable measured with error out of the likelihood. The first model is the matched case-control study for analyzing clustered binary outcomes. We develop low-rank thin plate splines for the case where a variable measured with error has an unknown, nonlinear relationship with the response. In addition to the semi- and fully Bayesian approaches, we propose another using expectation-maximization to detect both parametric and nonparametric relationships between the covariates and the binary outcome. We assess the performance of each method via simulation in terms of mean squared error and mean bias. We illustrate each method on a perturbed example of a 1--4 matched case-control study. The second regression model is the generalized linear model (GLM) with unknown link function. Usually, the link function is chosen by the user based on the distribution of the response variable, often the canonical link. However, when covariates are measured with error, incorrect inference as a result of the error can be compounded by an incorrect choice of link function. We assess the performance of the semi- and fully Bayesian methods via simulation in terms of mean squared error. We illustrate each method on the Framingham Heart Study dataset. The simulation results for both regression models support that the fully Bayesian approach is at least as good as the semi-Bayesian approach for adjusting for measurement error, particularly when the distribution of the variable measured with error and the distribution of the measurement error are misspecified. / Ph. D.
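The attenuation caused by ignoring covariate measurement error, which motivates this thesis, is easy to see in a short simulation. The sketch below is not from the thesis; the model, noise levels and the 0.7 slope are invented for illustration.

```python
# Illustration (not from the thesis): classical measurement error in a
# covariate attenuates the ordinary least-squares slope estimate.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x_true = rng.normal(0.0, 1.0, n)           # true covariate
x_obs = x_true + rng.normal(0.0, 0.8, n)   # covariate observed with error
y = 0.7 * x_true + rng.normal(0.0, 0.5, n) # response depends on the true covariate

def ols_slope(x, y):
    """Slope of a simple linear regression of y on x."""
    x_c, y_c = x - x.mean(), y - y.mean()
    return (x_c @ y_c) / (x_c @ x_c)

print("slope using true covariate:       ", round(ols_slope(x_true, y), 3))  # ~0.70
print("slope using error-prone covariate:", round(ols_slope(x_obs, y), 3))   # ~0.43
```

The naive slope shrinks roughly by the classical reliability ratio var(x) / (var(x) + var(u)), here 1 / (1 + 0.64) ≈ 0.61.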
66

Efficient Bayesian methods for mixture models with genetic applications / Métodos Bayesianos eficientes para modelos de mistura com aplicações em genética

Zuanetti, Daiane Aparecida 14 December 2016 (has links)
We propose Bayesian methods for selecting and estimating different types of mixture models which are widely used in Genetics and Molecular Biology. We specifically propose data-driven selection and estimation methods for a generalized mixture model, which accommodates the usual (independent) and the first-order (dependent) models in one framework, and QTL (quantitative trait locus) mapping models for independent and pedigree data. For clustering genes through a mixture model, we propose three nonparametric Bayesian methods: a marginal nested Dirichlet process (NDP), which is able to cluster distributions, and a predictive recursion clustering scheme (PRC) and a subset nonparametric Bayesian (SNOB) clustering algorithm for clustering big data. We analyze and compare the performance of the proposed methods and of traditional procedures of selection, estimation and clustering on simulated and real datasets. The proposed methods are more flexible, improve the convergence of the algorithms and provide more accurate estimates in many situations. In addition, we propose methods for estimating non-observable QTL genotypes and the genotypes of missing parents, and for improving the Mendelian probability of inheritance of non-founder genotypes using conditional independence structures. We also suggest applying diagnostic measures to check the goodness of fit of QTL mapping models.
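As a point of reference for the mixture-model setting only (not the NDP, PRC or SNOB procedures the thesis develops), the sketch below fits a two-component univariate Gaussian mixture by classical EM on simulated data; all values are invented.

```python
# Illustration (not from the thesis): EM for a two-component univariate
# Gaussian mixture, shown only to make the mixture-model setting concrete.
import numpy as np

rng = np.random.default_rng(3)
data = np.concatenate([rng.normal(-2.0, 1.0, 300),   # hypothetical cluster 1
                       rng.normal(3.0, 0.7, 200)])   # hypothetical cluster 2

def normal_pdf(x, mean, sd):
    return np.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

# Initialize mixture weight, means and standard deviations.
w, means, sds = 0.5, np.array([-1.0, 1.0]), np.array([1.0, 1.0])
for _ in range(200):
    # E-step: responsibility of the second component for each observation.
    d0 = (1 - w) * normal_pdf(data, means[0], sds[0])
    d1 = w * normal_pdf(data, means[1], sds[1])
    r = d1 / (d0 + d1)
    # M-step: update weight, means and standard deviations.
    w = r.mean()
    means = np.array([np.average(data, weights=1 - r), np.average(data, weights=r)])
    sds = np.array([np.sqrt(np.average((data - means[0]) ** 2, weights=1 - r)),
                    np.sqrt(np.average((data - means[1]) ** 2, weights=r))])

print("estimated means:", np.round(means, 2))                 # close to -2.0 and 3.0
print("estimated weight of second component:", round(w, 2))   # close to 200/500 = 0.4
```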
67

Bayesian and information-theoretic tools for neuroscience

Endres, Dominik M. January 2006 (has links)
The overarching purpose of the studies presented in this report is the exploration of the uses of information theory and Bayesian inference applied to neural codes. Two approaches were taken: first, starting from first principles, a coding mechanism is proposed and its results are compared to a biological neural code; second, tools from information theory are used to measure the information contained in a biological neural code. Chapter 3: The REC model proposed by Harpur and Prager codes inputs into a sparse, factorial representation while maintaining reconstruction accuracy. Here I propose a modification of the REC model to determine the optimal network dimensionality. The resulting code for unfiltered natural images is accurate, highly sparse, and a large fraction of the code elements show localized features. Furthermore, I propose an activation algorithm for the network that is faster and more accurate than a gradient descent based activation method. Moreover, it is demonstrated that asymmetric noise promotes sparseness. Chapter 4: A fast, exact alternative to Bayesian classification is introduced. Computational time is quadratic in both the number of observed data points and the number of degrees of freedom of the underlying model. As an example application, responses of single neurons from high-level visual cortex (area STSa) to rapid sequences of complex visual stimuli are analyzed. Chapter 5: I present an exact Bayesian treatment of a simple, yet sufficiently general probability distribution model. Given the data, the model complexity and the exact values of the expectations of entropies and their variances can be computed with polynomial effort. The expectation of the mutual information thus becomes available too, along with a strict upper bound on its variance. The resulting algorithm is first tested on artificial data; to that end, an information-theoretic similarity measure is derived. Second, the algorithm is demonstrated to be useful in neuroscience by studying the information content of the neural responses analyzed in the previous chapter. It is shown that the information throughput of STS neurons is maximized for stimulus durations of approximately 60 ms.
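To make the quantity studied in Chapter 5 concrete, the sketch below computes a naive plug-in estimate of the mutual information between stimulus and response from a joint count table. The thesis instead develops an exact Bayesian treatment with controlled bias and variance; the counts here are hypothetical.

```python
# Illustration (not from the thesis): plug-in (maximum-likelihood) estimate of
# the mutual information between stimulus and neural response from counts.
import numpy as np

def plugin_mutual_information(counts):
    """Mutual information (in bits) from a 2-D array of joint counts."""
    p = counts / counts.sum()                 # joint distribution p(s, r)
    ps = p.sum(axis=1, keepdims=True)         # marginal p(s)
    pr = p.sum(axis=0, keepdims=True)         # marginal p(r)
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (ps @ pr)[nz])).sum())

# Hypothetical counts: rows = stimuli, columns = discretized spike counts.
counts = np.array([[30, 10, 2],
                   [8, 25, 9],
                   [3, 12, 28]])
print(plugin_mutual_information(counts))  # ≈ 0.38 bits for this table
```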
68

Matematické metody konstrukce investičních portfolií / Mathematical methods of investment portfolios construction

Kůs, David January 2013 (has links)
This thesis describes statistical approaches to investment portfolio construction. The theoretical part presents modern portfolio theory and the statistical methods used to estimate the expected return and risk of a portfolio: a selection method, volatility modelling with multivariate GARCH models (primarily the DCC GARCH procedure), and a Bayesian approach with Jeffreys and conjugate prior densities. The practical part applies these statistical methods to the construction of investment portfolios, with maximization of the Sharpe ratio chosen as the optimization task. The portfolios under study are built from constituents of the Austrian Traded Index for which suitable time series of historical daily closing prices are available. The results attained by the assembled portfolios over a two-year investment horizon are then compared.
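For reference, the fully invested portfolio maximizing the Sharpe ratio has the closed form w ∝ Σ⁻¹(μ − r_f·1). The sketch below evaluates it for invented inputs; the thesis obtains the expected returns and covariance from GARCH-based and Bayesian estimators rather than fixed numbers.

```python
# Illustration (not from the thesis): maximum Sharpe ratio (tangency) portfolio
# weights for hypothetical expected returns and covariance of three assets.
import numpy as np

mu = np.array([0.08, 0.12, 0.10])            # hypothetical expected annual returns
cov = np.array([[0.040, 0.006, 0.004],
                [0.006, 0.090, 0.010],
                [0.004, 0.010, 0.060]])       # hypothetical covariance matrix
rf = 0.02                                     # hypothetical risk-free rate

excess = mu - rf
w = np.linalg.solve(cov, excess)              # w proportional to inv(cov) @ (mu - rf)
w /= w.sum()                                  # normalize so weights sum to one

port_ret = w @ mu
port_vol = np.sqrt(w @ cov @ w)
print("weights:", np.round(w, 3))
print("Sharpe ratio:", round((port_ret - rf) / port_vol, 3))
```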
69

Three essays in applied macroeconometrics / Trois essais en macroéconométrie appliquée

Lhuissier, Stéphane 23 October 2014 (has links)
This dissertation presents three essays in applied macroeconometrics. Their common denominator is the joint use of non-linear and Bayesian methods to account for business cycles. The choice of these methods rests on two fundamental observations. First, most macroeconomic and financial time series exhibit sudden changes in their behaviour resulting from events such as financial crises, abrupt shifts in fiscal and monetary policy, the alternation of expansion and recession phases, and so on. Accounting for these discontinuous and occasional changes requires non-linear modelling, that is, models whose parameters evolve over time. Second, the modern econometric analysis of vector autoregressive (VAR) models and of dynamic stochastic general equilibrium (DSGE) models raises many problems that a Bayesian framework can address. First of all, DSGE models are a partial and simplified representation of reality, the latter being generally too complicated to be formalized or too costly in terms of computational or intellectual resources. This misspecification, inherent to DSGE models, is usually compounded by a shortage of the informative data needed to obtain precise answers. In a Bayesian framework, the practitioner introduces additional information, a prior distribution, which makes inference on the model's parameters more accessible to macroeconomists. For DSGE models, a prior distribution built from microeconomic information, such as aggregate elasticities or the long-run average growth rates of macroeconomic variables, shifts the model's likelihood function toward the economically interpretable regions of the parameter space, with the aim of reaching a reasonable interpretation of the structural parameters and thereby making inference much more precise. [...] / This dissertation presents three essays in applied macroeconometrics. Their common denominator is the use of Bayesian and non-linear methods to study business cycle fluctuations. The first chapter of this dissertation revisits the issue of whether business cycles with financial crises are different, in the euro area since 1999. To do so, I fit a vector autoregression in which equation coefficients and structural disturbance variances are allowed to change over time according to Markov-switching processes. I show that financial crises have been characterized by changes not only in the variances of structural shocks, but also in the predictable and systematic part of the financial sector. By the predictable and systematic part of the financial sector, I mean the equation coefficients that describe the financial behavior of the system. I then examine the role of the financial sector in financial crises and standard business-cycle fluctuations. The evidence indicates that the relative importance of financial shocks (the "non-systematic part") is significantly higher in periods of financial distress than in non-distress periods, but the transmission of these shocks to the economy appears linear over time. Counterfactual analyses suggest that the systematic part of the financial sector accounted for up to 2 and 4 percentage points of the drop in output growth during the 2001-2003 downturn and the two recessions, respectively.
The second chapter examines the quantitative sources of changes in the macroeconomic volatility of the euro area since 1985. To do so, I estimate a variety of large-scale Dynamic Stochastic General Equilibrium (DSGE) models in which structural disturbance variances are allowed to change according to a Markov-switching process. The empirical results show that the best-fitting model is one in which all shock variances are allowed to switch between a low- and a high-volatility regime, with regime changes in the volatilities of structural shocks synchronized. The high-volatility regime was in place during the pre-euro period, while the low-volatility regime has prevailed since the introduction of the euro. Although the size of the different types of shock differs between the two regimes, their relative importance remains unchanged. Neutral technology shocks and shocks to the marginal efficiency of investment are the dominant sources of business cycle fluctuations. Moreover, the decline in the variance of investment shocks coincides remarkably well with the development of the European financial market that increased firms' and households' access to credit, suggesting that investment shocks reflect shocks originating in the financial system. [...]
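As a miniature of the regime-switching variance mechanism used in both chapters, the sketch below simulates a series whose shock volatility follows a two-state Markov chain and applies the Hamilton filter with known parameters to recover regime probabilities. All parameter values are invented; the dissertation estimates full Markov-switching VAR and DSGE models, not this scalar toy.

```python
# Illustration (not from the dissertation): two-state Markov-switching shock
# variances, with the Hamilton filter recovering regime probabilities.
import numpy as np

rng = np.random.default_rng(1)
T = 400
P = np.array([[0.97, 0.03],      # P[i, j] = Prob(regime j tomorrow | regime i today)
              [0.05, 0.95]])
sigma = np.array([0.5, 2.0])     # regime-specific shock standard deviations

# Simulate the regime path and the observed series.
s = np.zeros(T, dtype=int)
for t in range(1, T):
    s[t] = rng.choice(2, p=P[s[t - 1]])
y = rng.normal(0.0, sigma[s])

def normal_pdf(x, sd):
    return np.exp(-0.5 * (x / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

# Hamilton filter: filtered regime probabilities given known parameters.
prob = np.full(2, 0.5)           # initial regime probabilities
filtered = np.zeros((T, 2))
for t in range(T):
    pred = prob @ P              # one-step-ahead regime probabilities
    post = pred * normal_pdf(y[t], sigma)
    prob = post / post.sum()
    filtered[t] = prob

print("share of periods the filter assigns >50% to the high-vol regime:",
      round((filtered[:, 1] > 0.5).mean(), 2))
print("true share of periods in the high-vol regime:", round((s == 1).mean(), 2))
```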
70

Tomography of the Earth by Geo-Neutrino Emission / Tomografia da Terra pela emissão de geo-neutrinos

Tavares, Leonardo Estêvão Schendes 05 August 2014 (has links)
Geo-neutrinos are electron anti-neutrinos originating from the beta decay of a few elements in the decay chains of Th and U present in the Earth's interior. Recent experimental measurements of these particles have generated great expectations toward a new way of directly investigating the interior of the planet. It is a new multidisciplinary area, which might in the near future bring considerable clues about the Earth's thermal dynamics and formation processes. In this work, we construct an inferential model based on the multigrid priors method to deal, in a generic way, with the geo-neutrino source reconstruction problem. It is an inverse problem: given a region in space V and a finite, small number of measurements of the potential generated on the surface of V by some charge distribution ρ, we try to infer ρ. We present examples of applications and analyses of the method in two- and three-dimensional models, and we also comment on how other a priori information may be included. Furthermore, we indicate the steps for inferring the best locations for future detectors. The objective is to maximize the amount of information that can be obtained from experimental measurements. We resort to an entropic method of inference which may be applied right after the results of the multigrid method are obtained.
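As a toy analogue of the reconstruction problem (not the multigrid priors method itself), the sketch below solves a 1-D linear inverse problem by Tikhonov-regularized least squares: a localized source density generates a potential through a known kernel, and the density is recovered from noisy potential readings. Grid, kernel, noise level and regularization strength are all invented.

```python
# Illustration (not the multigrid priors method from the thesis): a toy linear
# inverse problem of the same flavor, solved by Tikhonov (ridge) least squares.
import numpy as np

rng = np.random.default_rng(2)
x_src = np.linspace(0.0, 1.0, 50)              # source grid
x_obs = np.linspace(0.0, 1.0, 25)              # measurement locations

# Forward model: potential = sum of rho / (softened) distance.
G = 1.0 / np.sqrt((x_obs[:, None] - x_src[None, :]) ** 2 + 0.1 ** 2)

rho_true = np.exp(-((x_src - 0.6) ** 2) / 0.01)        # one localized source at x = 0.6
v = G @ rho_true + rng.normal(0.0, 0.05, x_obs.size)   # noisy potential data

# Tikhonov estimate: argmin ||G rho - v||^2 + lam * ||rho||^2.
lam = 1.0
rho_hat = np.linalg.solve(G.T @ G + lam * np.eye(x_src.size), G.T @ v)

print("true source peak at x =", round(x_src[rho_true.argmax()], 2))
print("recovered peak near x =", round(x_src[rho_hat.argmax()], 2))  # should land close to 0.6
```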
