341

Inferência bayesiana para testes acelerados "step-stress" com dados de falha sob censura e distribuição Gama / Bayesian inference for step-stress accelerated tests with failure data under censoring and a Gamma distribution

Chagas, Karlla Delalibera [UNESP] 16 April 2018 (has links)
Pró-Reitoria de Pós-Graduação (PROPG UNESP) / In this work we model data arising from accelerated life tests in which the applied stress loading is of the step-stress type. We consider the simple and multiple step-stress models under type II censoring and progressive type II censoring, assuming that the lifetimes of the items under test follow a Gamma distribution; the simple step-stress model under type II censoring is also studied in the presence of competing risks. The model parameters are estimated both through a classical approach, via maximum likelihood, and through a Bayesian approach with non-informative priors. The two approaches are compared by simulation for different sample sizes and different loss functions (squared error, LINEX, entropy), and summary statistics are used to check which method comes closer to the true parameter values.
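As a minimal illustration of the loss functions mentioned above (not the author's code), the sketch below computes the Bayes estimates implied by the squared-error, LINEX, and entropy losses from a set of posterior draws; the draws and the loss constants a and q are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.gamma(shape=3.0, scale=0.5, size=5000)  # stand-in posterior draws of a Gamma parameter

a, q = 0.5, 1.0  # hypothetical LINEX and entropy loss constants

est_se = theta.mean()                                 # squared-error loss -> posterior mean
est_linex = -np.log(np.mean(np.exp(-a * theta))) / a  # LINEX loss -> -(1/a) log E[exp(-a*theta)]
est_entropy = np.mean(theta ** -q) ** (-1.0 / q)      # general entropy loss -> (E[theta^-q])^(-1/q)

print(f"squared error: {est_se:.3f}  LINEX: {est_linex:.3f}  entropy: {est_entropy:.3f}")
```

Under squared-error loss the estimate is the posterior mean, while the LINEX and entropy losses penalize over- and under-estimation asymmetrically, which is why the three values differ.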
342

Um modelo espaço-temporal bayesiano para medir a interação social na criminalidade: simulações e evidências na Região Metropolitana de São Paulo / A Bayesian spatio-temporal model for measuring social interaction in crime: simulations and evidence from the São Paulo Metropolitan Region

Gazzano, Marcelo January 2008 (has links)
In this work we use the spatio-temporal model proposed in Rojas (2004) to measure the social interaction component of crime in the São Paulo metropolitan region. Monte Carlo simulations are carried out to test the model's estimation performance under different scenarios; estimation improves as the number of observations over time increases. The empirical results indicate that the São Paulo metropolitan region is a hot spot within the state, since a higher degree of social interaction is found for the homicide rate than for the robbery and theft rates.
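To illustrate the kind of Monte Carlo evidence summarized above (estimation improving with more observations over time), the toy sketch below estimates the persistence parameter of a simple AR(1) process, a stand-in and not the Rojas (2004) spatio-temporal model, for increasing sample lengths.

```python
import numpy as np

rng = np.random.default_rng(1)
rho = 0.6                                    # stand-in "interaction" parameter of a toy AR(1)

for T in (20, 50, 200):                      # number of time observations
    sq_errs = []
    for _ in range(500):                     # Monte Carlo replications
        y = np.zeros(T)
        for t in range(1, T):
            y[t] = rho * y[t - 1] + rng.normal()
        rho_hat = np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)   # least-squares estimate
        sq_errs.append((rho_hat - rho) ** 2)
    print(f"T = {T:3d}  RMSE = {np.sqrt(np.mean(sq_errs)):.3f}")
```

The root mean squared error shrinks as T grows, mirroring the qualitative finding reported in the abstract.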
343

Imputação múltipla: comparação e eficiência em experimentos multiambientais / Multiple imputation: comparison and efficiency in multi-environment trials

Maria Joseane Cruz da Silva 19 July 2012 (has links)
In genotype-by-environment trials, missing values are common, often due to an insufficient amount of genotype material for application, which complicates, for example, the recommendation of the most productive genotypes, since most multivariate statistical techniques require a complete data matrix. Methods that estimate the missing values from the available data, known as data imputation (single and multiple), are therefore applied, taking the pattern and mechanism of missingness into account. The goal of this work is to evaluate the efficiency of distribution-free multiple imputation (IMLD) (BERGAMO et al., 2008; BERGAMO, 2007), comparing it with multiple imputation via Markov chain Monte Carlo (IMMCMC), for imputing missing cells in a genotype (25) by environment (7) interaction trial. The data come from a randomized block experiment with Eucalyptus grandis (LAVORANTI, 2003), from which percentages of observations (10%, 20%, 30%) were removed at random and then imputed by the two methods. For both methods the relative efficiency remained above 90% at every percentage, being lowest for environment 4 when imputed with IMLD. The overall accuracy measure increased with the amount of missing data when the missing values were imputed with IMMCMC, whereas for IMLD these values varied and were lowest at 20% random removal. Finally, it is important to note that IMMCMC relies on an assumption of normality, whereas IMLD has the advantage of imposing no restriction on the data distribution or on the missingness mechanisms and patterns.
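A rough sketch of the evaluation protocol described above (remove cells at random, impute, and compare against the true values), using naive environment-mean imputation as a stand-in for the IMLD and IMMCMC methods; the 25 by 7 data matrix is simulated.

```python
import numpy as np

rng = np.random.default_rng(2)
Y = rng.normal(loc=30.0, scale=5.0, size=(25, 7))   # hypothetical 25 genotypes x 7 environments

for frac in (0.10, 0.20, 0.30):
    mask = rng.random(Y.shape) < frac               # cells removed at random
    Y_obs = np.where(mask, np.nan, Y)
    col_means = np.nanmean(Y_obs, axis=0)           # naive single imputation by environment mean
    Y_imp = np.where(mask, col_means, Y_obs)
    accuracy = np.corrcoef(Y[mask], Y_imp[mask])[0, 1]
    print(f"{int(frac * 100)}% missing: correlation true vs imputed = {accuracy:.3f}")
```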
344

Mapeamento de QTLs utilizando as abordagens Clássica e Bayesiana / Mapping QTLs: Classical and Bayesian approaches

Elisabeth Regina de Toledo 02 October 2006 (has links)
Grain yield and other traits of economic importance in maize, such as plant height, ear length, and ear diameter, show polygenic inheritance, which makes it difficult to obtain information about the genetic bases underlying the variation of these traits. Associations between markers and QTLs were analyzed by composite interval mapping (CIM) and Bayesian interval mapping (BIM). Using grain-yield data from 256 maize progenies genotyped for 139 codominant molecular markers, both methodologies identified markers associated with QTLs. With CIM, marker-QTL associations were declared significant when the likelihood-ratio (LR) statistic along the chromosome reached its maximum among the values exceeding the critical threshold LR = 11.5 in the interval considered. Ten QTLs were mapped, distributed over three chromosomes, and together they explained 19.86% of the genetic variance. The predominant types of allelic interaction were partial dominance (four QTLs) and complete dominance (three QTLs); the average degree of dominance was 1.12, indicating complete dominance on average. Most of the alleles favorable to the trait came from the parental line L02-02D, which had the higher grain yield. Under the Bayesian approach, Markov chain Monte Carlo (MCMC) sampling was implemented to obtain a sample from the posterior distribution of the parameters of interest, incorporating prior beliefs and uncertainty, and summaries of the QTL locations and of their additive and dominance effects were obtained. Reversible-jump MCMC (RJMCMC) was used for the Bayesian analysis, and Bayes factors were computed to estimate the number of QTLs. With BIM, significant marker-QTL associations were found on four chromosomes, with a total of five QTLs mapped, which together explained 13.06% of the genetic variance. Most of the alleles favorable to the trait again came from the parental line L02-02D.
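As an illustration of the CIM decision rule described above, the sketch below scans a hypothetical LR profile along a chromosome and declares a QTL where the profile exceeds the critical value LR = 11.5; the positions and LR values are simulated, not taken from the thesis data.

```python
import numpy as np

rng = np.random.default_rng(3)
position_cm = np.arange(0.0, 200.0, 2.0)              # hypothetical scan positions along a chromosome (cM)
lr = np.abs(rng.normal(4.0, 3.0, position_cm.size))   # stand-in LR profile from a CIM scan
lr[40:46] += 12.0                                     # artificial peak so the example declares one QTL

threshold = 11.5                                      # critical value used in the thesis
if (lr > threshold).any():
    peak = int(np.argmax(lr))
    print(f"QTL declared near {position_cm[peak]:.0f} cM (max LR = {lr[peak]:.1f})")
else:
    print("no position exceeds the LR threshold")
```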
345

A Markov chain Monte Carlo method for inverse stochastic modeling and uncertainty assessment

Fu, Jianlin 07 May 2008 (has links)
Unlike the traditional two-stage methods, a conditional and inverse-conditional simulation approach may directly generate independent, identically distributed realizations that honor both static data and state data in one step. The Markov chain Monte Carlo (McMC) method has proved to be a powerful tool for performing this type of stochastic simulation. One of the main advantages of McMC over traditional sensitivity-based optimization methods for inverse problems is its power, flexibility, and well-posedness in incorporating observation data from different sources. In this work, an improved version of the McMC method is presented to perform stochastic simulation of reservoirs and aquifers in the framework of multi-Gaussian geostatistics. First, a blocking scheme is proposed to overcome the limitations of the classic single-component Metropolis-Hastings-type McMC. One of the main characteristics of the blocking McMC (BMcMC) scheme is that, depending on the inconsistency between the prior model and reality, it can preserve the prior spatial structure and statistics as specified by the user. At the same time, it improves the mixing of the Markov chain and hence enhances the computational efficiency of the McMC. Furthermore, the exploration ability and the mixing speed of McMC are further improved by coupling multiscale proposals, i.e., the coupled multiscale McMC method. In order to make the BMcMC method capable of dealing with high-dimensional cases, a multiscale scheme is introduced to accelerate the computation of the likelihood, which greatly improves the computational efficiency of the McMC, given that most of the computational effort is spent on the forward simulations. To this end, a flexible-grid full-tensor finite-difference simulator, which is widely compatible with the outputs from various upscaling subroutines, is developed to solve the flow equations, and a constant-displacement random-walk particle-tracking method, which enhances the com / Fu, J. (2008). A markov chain monte carlo method for inverse stochastic modeling and uncertainty assessment [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/1969 / Palancia
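A minimal sketch of a block-updating random-walk Metropolis sampler on a toy posterior, to illustrate the idea of proposing joint updates of several components at once; it assumes a made-up forward model and is not the BMcMC sampler, geostatistical prior, or flow simulator developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(4)
dim, data = 10, 2.0

def log_post(x):
    # hypothetical stand-in posterior: N(0, I) prior plus a misfit term for a toy "forward model" sum(x)
    return -0.5 * np.dot(x, x) - 0.5 * ((x.sum() - data) / 0.1) ** 2

x = np.zeros(dim)
accepted = 0
for _ in range(20000):
    block = rng.choice(dim, size=3, replace=False)   # propose a joint update of a random block
    prop = x.copy()
    prop[block] += 0.2 * rng.normal(size=3)
    # symmetric proposal, so the acceptance ratio reduces to the posterior ratio
    if np.log(rng.random()) < log_post(prop) - log_post(x):
        x, accepted = prop, accepted + 1
print("acceptance rate:", accepted / 20000)
```

Updating a block jointly, rather than one component at a time, is the basic mechanism the abstract credits with improving mixing.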
346

Efficacité des distributions instrumentales en équilibre dans un algorithme de type Metropolis-Hastings / Efficiency of balanced proposal distributions in a Metropolis-Hastings-type algorithm

Boisvert-Beaudry, Gabriel 08 1900 (has links)
In this master's thesis, we are interested in a new class of informed proposal distributions for Metropolis-Hastings algorithms. These new proposals, called balanced proposals, are obtained by adding information about the target density to an uninformed proposal distribution. A Markov chain generated by a balanced proposal is reversible with respect to the target density without the need for an acceptance probability in two extreme cases: the local case, where the proposal variance tends to zero, and the global case, where it tends to infinity. The balanced proposals need to be approximated to be used in practice. We show that the local case leads to the Metropolis-adjusted Langevin algorithm (MALA), while the global case leads to a small modification of the MALA. These results are used to create a new algorithm that generalizes the MALA by adding a new parameter. Depending on the value of this parameter, the new algorithm uses a locally balanced proposal, a globally balanced proposal, or an interpolation between these two cases. We then study the optimal choice of this parameter as a function of the dimension of the target distribution under two regimes: the asymptotic regime and a finite-dimensional regime. Simulations are presented to illustrate the theoretical results. Finally, we apply the new algorithm to a Bayesian logistic regression problem and compare its efficiency to that of existing algorithms. The results are satisfying from both a theoretical and a computational standpoint.
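A minimal sketch of the standard MALA, the local limiting case mentioned above, on a toy Gaussian target; the step size eps is a made-up tuning value, and the generalized algorithm introduced in the thesis is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(5)
eps = 0.9                                        # step size, a tuning parameter

def log_pi(x):       return -0.5 * np.dot(x, x)  # toy target: standard Gaussian
def grad_log_pi(x):  return -x

def log_q(y, x):     # log density of the MALA proposal y given the current state x
    mean = x + 0.5 * eps ** 2 * grad_log_pi(x)
    return -0.5 * np.sum((y - mean) ** 2) / eps ** 2

x, accepted = np.zeros(2), 0
for _ in range(10000):
    y = x + 0.5 * eps ** 2 * grad_log_pi(x) + eps * rng.normal(size=x.size)
    log_alpha = log_pi(y) + log_q(x, y) - log_pi(x) - log_q(y, x)   # Metropolis-Hastings ratio
    if np.log(rng.random()) < log_alpha:
        x, accepted = y, accepted + 1
print("acceptance rate:", accepted / 10000)
```

The drift term pulls proposals toward high-density regions, which is exactly the "information about the target" that a balanced proposal adds to a plain random walk.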
347

Data-driven test case design of automatic test cases using Markov chains and a Markov chain Monte Carlo method / Datadriven testfallsdesign av automatiska testfall med Markovkedjor och en Markov chain Monte Carlo-metod

Lindahl, John, Persson, Douglas January 2021 (has links)
Large and complex software that is frequently changed leads to testing challenges. It is well established that the later a fault is detected in software development, the more it costs to fix. This thesis aims to research and develop a method for generating relevant and non-redundant test cases for a regression test suite, in order to catch bugs as early in the development process as possible. The research was carried out at Axis Communications AB with their products and systems in mind. The approach uses user data to dynamically generate a Markov chain model and, with a Markov chain Monte Carlo method, strengthen that model. The model generates test case proposals, detects test gaps, and identifies redundant test cases based on the user data and data from a test suite. The sampling in the Markov chain Monte Carlo method can be modified to bias the model toward test coverage or relevancy. The model is generated generically and can therefore be implemented in other API-driven systems. It was designed with scalability in mind, and further implementations can be made to increase its complexity and specialize it for individual needs.
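A small sketch of the underlying idea, assuming invented API call names and sessions: estimate a Markov chain transition matrix from user sessions and walk it to propose test sequences. The thesis's MCMC reweighting for coverage or relevancy is not implemented here.

```python
import numpy as np
from collections import defaultdict

# hypothetical user sessions, each a sequence of API calls (made up for the example)
sessions = [["login", "list", "view", "logout"],
            ["login", "view", "edit", "logout"],
            ["login", "list", "edit", "view", "logout"]]

counts = defaultdict(lambda: defaultdict(int))
for s in sessions:
    for a, b in zip(s, s[1:]):
        counts[a][b] += 1                       # count observed transitions

states = sorted({step for s in sessions for step in s})
P = np.zeros((len(states), len(states)))        # row-stochastic transition matrix
for i, a in enumerate(states):
    total = sum(counts[a].values())
    for j, b in enumerate(states):
        P[i, j] = counts[a][b] / total if total else 0.0

# propose a candidate test case by walking the chain from "login" until "logout"
rng = np.random.default_rng(6)
state, walk = "login", ["login"]
while state != "logout" and len(walk) < 10:
    state = str(rng.choice(states, p=P[states.index(state)]))
    walk.append(state)
print(walk)
```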
348

Recursive-RANSAC: A Novel Algorithm for Tracking Multiple Targets in Clutter

Niedfeldt, Peter C. 02 July 2014 (has links) (PDF)
Multiple target tracking (MTT) is the process of identifying the number of targets present in a surveillance region and the state estimate, or track, of each target. MTT remains a challenging problem due to the NP-hard data association step, where unlabeled measurements are identified as either a measurement of an existing target, a new target, or a spurious measurement called clutter. Existing techniques suffer from at least one of the following drawbacks: divergence in clutter, underlying assumptions on the number of targets, high computational complexity, time-consuming implementation, poor performance at low detection rates, and/or poor track continuity. Our goal is to develop an efficient MTT algorithm that is simple yet effective and that maintains track continuity, enabling persistent tracking of an unknown number of targets. A field related to tracking is regression analysis, where the parameters of static signals are estimated from a batch or a sequence of data. The random sample consensus (RANSAC) algorithm was developed to mitigate the effects of spurious measurements, and has since found wide application within the computer vision community due to its robustness and efficiency. The main concept of RANSAC is to form numerous simple hypotheses from a batch of data and identify the hypothesis with the most supporting measurements. Unfortunately, RANSAC is not designed to track multiple targets using sequential measurements. To this end, we have developed the recursive-RANSAC (R-RANSAC) algorithm, which tracks multiple signals in clutter without requiring prior knowledge of the number of existing signals. The basic premise of the R-RANSAC algorithm is to store a set of RANSAC hypotheses between time steps. New measurements are used either to update existing hypotheses or to generate new hypotheses using RANSAC. Storing multiple hypotheses enables R-RANSAC to track multiple targets. Good tracks are identified when a sufficient number of measurements support a hypothesis track. The complexity of R-RANSAC is shown to be quadratic in the number of measurements and stored tracks, and under moderate assumptions R-RANSAC converges in mean to the true states. We apply R-RANSAC to a variety of simulation, camera, and radar tracking examples.
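A minimal sketch of the plain batch RANSAC idea referenced above (hypothesize from minimal samples, keep the hypothesis with the most support), fitting a line in clutter; it is not the recursive R-RANSAC tracker, and all data are simulated.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200
x = rng.uniform(0.0, 10.0, n)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.2, n)      # measurements of a true line y = 2x + 1
y[:60] = rng.uniform(0.0, 25.0, 60)              # 60 spurious "clutter" measurements

best_inliers, best_model = 0, None
for _ in range(200):                             # many simple hypotheses from minimal samples
    i, j = rng.choice(n, size=2, replace=False)
    if x[i] == x[j]:
        continue
    slope = (y[j] - y[i]) / (x[j] - x[i])
    intercept = y[i] - slope * x[i]
    inliers = int(np.sum(np.abs(y - (slope * x + intercept)) < 0.5))
    if inliers > best_inliers:                   # keep the hypothesis with the most support
        best_inliers, best_model = inliers, (slope, intercept)

print("best model:", best_model, "supported by", best_inliers, "measurements")
```

R-RANSAC extends this batch idea by keeping several such hypotheses alive across time steps and updating them with sequential measurements.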
349

ESSAYS ON SCALABLE BAYESIAN NONPARAMETRIC AND SEMIPARAMETRIC MODELS

Chenzhong Wu (18275839) 29 March 2024 (has links)
In this thesis, we delve into the exploration of several nonparametric and semiparametric econometric models within the Bayesian framework, highlighting their applicability across a broad spectrum of microeconomic and macroeconomic issues. Positioned in the big data era, where data collection and storage expand at an unprecedented rate, the complexity of the economic questions we aim to address is similarly escalating. This dual challenge necessitates leveraging increasingly large datasets, thereby underscoring the critical need for designing flexible Bayesian priors and developing scalable, efficient algorithms tailored to high-dimensional datasets.

The initial two chapters, Chapters 2 and 3, are dedicated to crafting Bayesian priors suited for environments laden with a vast array of variables. These priors, alongside their corresponding algorithms, are optimized for computational efficiency, scalability to extensive datasets, and, ideally, distributability. We aim for these priors to accommodate varying levels of dataset sparsity. Chapter 2 assesses nonparametric additive models, employing a smoothing prior alongside a band matrix for each additive component; utilizing the Bayesian backfitting algorithm significantly alleviates the computational load. In Chapter 3, we address multiple linear regression settings by adopting a flexible scale mixture of normal priors for the coefficient parameters, thus allowing data-driven determination of the necessary amount of shrinkage. The use of a conjugate prior enables a closed-form solution for the posterior, markedly enhancing computational speed.

The subsequent chapters, Chapters 4 and 5, pivot toward time series modeling and Bayesian algorithms. A semiparametric modeling approach dissects the stochastic volatility in macro time series into persistent and transitory components, the latter, additional component addressing outliers. Utilizing a Dirichlet process mixture prior for the transitory part and a collapsed Gibbs sampling algorithm, we devise a method capable of efficiently processing over 10,000 observations and 200 variables. Chapter 4 introduces a simple univariate model, while Chapter 5 presents comprehensive Bayesian VARs. Our algorithms, more efficient and effective in managing outliers than existing ones, are adept at handling extensive macro datasets with hundreds of variables.
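As a small illustration of the computational point about conjugacy in Chapter 3, the sketch below computes the closed-form Gaussian posterior for regression coefficients under a fixed Gaussian prior; this is only the simplest conjugate case, not the data-driven scale mixture of normals or the Dirichlet process mixtures developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(8)
n, p = 500, 20
X = rng.normal(size=(n, p))
beta_true = np.concatenate([rng.normal(size=5), np.zeros(p - 5)])   # sparse truth for illustration
y = X @ beta_true + rng.normal(size=n)

sigma2, tau2 = 1.0, 0.5          # assumed noise variance and prior scale (both made up here)
# conjugate prior beta ~ N(0, tau2 * I) gives a closed-form Gaussian posterior
A = X.T @ X / sigma2 + np.eye(p) / tau2
post_mean = np.linalg.solve(A, X.T @ y / sigma2)
post_cov = np.linalg.inv(A)

print(np.round(post_mean[:8], 2))
```

No sampling is needed in this case, which is the source of the speed advantage the abstract mentions.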
350

Assessment of Soil Corrosion in Underground Pipelines via Statistical Inference

Yajima, Ayako 10 September 2015 (has links)
No description available.
