41

Impact du choix de la fonction de perte en segmentation d'images et application à un modèle de couleurs [Impact of the choice of loss function in image segmentation, with an application to a colour model]

Poirier, Louis-François January 2006
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
42

Methods for Bayesian inversion of seismic data

Walker, Matthew James January 2015
The purpose of Bayesian seismic inversion is to combine information derived from seismic data and prior geological knowledge to determine a posterior probability distribution over parameters describing the elastic and geological properties of the subsurface. Typically the subsurface is modelled by a cellular grid model containing thousands or millions of cells within which these parameters are to be determined. Such inversions are therefore computationally expensive due to the size of the parameter space (which is proportional to the number of grid cells) over which the posterior is to be determined. In practice, approximations to Bayesian seismic inversion must be considered. A particular, existing approximate workflow is described in this thesis: the so-called two-stage inversion method explicitly splits the inversion problem into elastic and geological inversion stages. These two stages sequentially estimate the elastic parameters given the seismic data, and then the geological parameters given the elastic parameter estimates. In this thesis a number of methodologies are developed which enhance the accuracy of this approximate workflow. To reduce computational cost, existing elastic inversion methods often incorporate only simplified prior information about the elastic parameters. Thus a method is introduced which transforms such results, obtained using prior information specified using only two-point geostatistics, into new estimates containing sophisticated multi-point geostatistical prior information. The method uses a so-called deep neural network, trained using only synthetic instances (or 'examples') of these two estimates, to apply this transformation. The method is shown to improve the resolution and accuracy (by comparison to well measurements) of elastic parameter estimates determined for a real hydrocarbon reservoir. It has been shown previously that so-called mixture density network (MDN) inversion can be used to solve geological inversion analytically (and thus very rapidly and efficiently), but only under certain assumptions about the geological prior distribution. A so-called prior replacement operation is developed here, which can be used to relax these requirements. It permits the efficient MDN method to be incorporated into general stochastic geological inversion methods which are free from the restrictive assumptions. Such methods rely on the use of Markov chain Monte Carlo (MCMC) sampling, which estimates the posterior (over the geological parameters) by producing a correlated chain of samples from it. It is shown that this approach can yield biased estimates of the posterior. Thus an alternative method which obtains a set of non-correlated samples from the posterior is developed, avoiding the possibility of bias in the estimate. The new method was tested on a synthetic geological inversion problem; its results compared favourably to those of Gibbs sampling (an MCMC method) on the same problem, which exhibited very significant bias. The geological prior information used in seismic inversion can be derived from real images which bear similarity to the geology anticipated within the target region of the subsurface. Such so-called training images, from which this information (in the form of geostatistics) may be extracted, are not always available. In this case appropriate training images may be generated by geological experts. However, this process can be costly and difficult. Thus an elicitation method (based on a genetic algorithm) is developed here which obtains the appropriate geostatistics reliably and directly from a geological expert, without the need for training images. Twelve experts were asked to use the algorithm (individually) to determine the appropriate geostatistics for a physical (target) geological image. The majority of the experts were able to obtain a set of geostatistics which were consistent with the true (measured) statistics of the target image.
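
To make the sampling contrast concrete, here is a minimal Python/NumPy sketch of random-walk Metropolis on a hypothetical one-dimensional bimodal posterior — not the thesis's geological model — illustrating the correlated chain of samples that MCMC produces and that the abstract identifies as a source of bias:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_post(x):
    # Hypothetical stand-in posterior: a mixture of two Gaussians,
    # loosely mimicking a multimodal geological parameter.
    return np.logaddexp(-0.5 * (x - 2.0) ** 2, -0.5 * (x + 2.0) ** 2)

def metropolis(n_steps, step=0.5, x0=0.0):
    # Random-walk Metropolis: produces a *correlated* chain of samples.
    chain = np.empty(n_steps)
    x, lp = x0, log_post(x0)
    for i in range(n_steps):
        prop = x + step * rng.normal()
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # accept/reject
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

chain = metropolis(50_000)
# Lag-1 autocorrelation: high values signal the serial correlation that
# can bias naive posterior summaries computed from short chains.
c = chain - chain.mean()
print("lag-1 autocorrelation:", (c[:-1] @ c[1:]) / (c @ c))
```

A set of independent draws from the same posterior would show lag-1 autocorrelation near zero, which is the property the non-correlated sampling method developed in the thesis aims for.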
43

Heterogeneidad de estados en Hidden Markov models [State heterogeneity in hidden Markov models]

Padilla Pérez, Nicolás January 2014
Master's in Operations Management / Civil Industrial Engineering / Hidden Markov models (HMMs) have been widely used to model dynamic behaviours such as consumer attention, web browsing, customer relationships, product choice, and physicians' drug prescribing. Usually, when an HMM is estimated simultaneously for all customers, the model parameters are estimated assuming the same number of hidden states for every customer. This thesis studies the validity of that assumption by identifying whether there is a potential estimation bias when the number of states is heterogeneous. To study the potential bias, an extensive Monte Carlo simulation exercise is carried out. In particular, we study: a) whether or not there is bias in the parameter estimates, b) which factors increase or decrease the bias, and c) which methods can be used to estimate the model correctly when there is heterogeneity in the number of states. In the simulation exercise, data are generated using a two-state HMM for 50% of the customers and a three-state HMM for the remaining 50%. A hierarchical Bayesian MCMC procedure is then used to estimate the parameters of an HMM with the same number of states for all customers. Regarding the existence of bias, the results show that individual-level parameters are recovered correctly; however, the aggregate-level parameters corresponding to the heterogeneity distribution of the individual parameters must be reported with care. This difficulty arises from the mixture of two customer segments with different behaviour. Regarding the factors that affect the bias, the results show that: 1) as the proportion of two-state customers increases, the bias in the aggregate results also increases; 2) when heterogeneity is incorporated into the conditional probabilities, duplicated states are generated for two-state customers and the states no longer represent the same thing for all customers, increasing the aggregate-level bias; and 3) when the intercept of the conditional probabilities is heterogeneous, incorporating exogenous variables can help identify the states consistently across customers. Two approaches are proposed to reduce these problems: first, using a mixture of Gaussians as the prior distribution to capture multimodal heterogeneity; and second, using a latent-class model with HMMs of a different number of states in each class. The first model helps represent the aggregate results better, but it does not prevent duplicated states for customers with fewer states. The second model captures the heterogeneity in the number of states, correctly identifying aggregate-level behaviour and avoiding duplicated states for two-state customers. Finally, this thesis shows that in most of the cases studied, the assumption of a fixed number of states does not generate individual-level bias when heterogeneity is incorporated. This helps improve the estimation; however, caution must be taken when drawing conclusions from the aggregate results.
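
As a rough illustration of the simulation design described above, the following sketch generates binary observation sequences from a two-state HMM for half of the customers and a three-state HMM for the rest; all parameter values are invented for the example and are not taken from the thesis:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_hmm(T, trans, emit_p):
    # trans: (K, K) transition matrix; emit_p: per-state Bernoulli rate.
    K = trans.shape[0]
    s = rng.integers(K)
    states, obs = np.empty(T, int), np.empty(T, int)
    for t in range(T):
        states[t] = s
        obs[t] = rng.uniform() < emit_p[s]
        s = rng.choice(K, p=trans[s])
    return states, obs

# Illustrative (invented) parameter values.
trans2 = np.array([[0.9, 0.1], [0.2, 0.8]])
emit2 = np.array([0.1, 0.7])
trans3 = np.array([[0.8, 0.1, 0.1], [0.1, 0.8, 0.1], [0.1, 0.1, 0.8]])
emit3 = np.array([0.1, 0.5, 0.9])

# 50% of customers follow a 2-state HMM, 50% a 3-state HMM; fitting one
# common-K HMM to `data` is the scenario whose bias the thesis examines.
data = [simulate_hmm(100, trans2, emit2)[1] for _ in range(50)] + \
       [simulate_hmm(100, trans3, emit3)[1] for _ in range(50)]
```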
44

Incorporating high-dimensional exposure modelling into studies of air pollution and health

Liu, Yi January 2015
Air pollution is an important determinant of health. There is convincing, and growing, evidence linking the risk of disease, and premature death, with exposure to various pollutants including fine particulate matter and ozone. Knowledge about health and environmental risks and their trends is an important stimulus for developing environmental and public health policy. In order to perform studies into the risks of environmental hazards on human health, there is a requirement for accurate estimates of the exposures that might be experienced by the populations at risk. In this thesis we develop spatio-temporal models within a Bayesian framework to obtain accurate estimates of such exposures. These models are set within a hierarchical framework in a Bayesian setting, with different levels describing dependencies over space and time. The complexity of hierarchical models and the large amounts of data that can arise from environmental networks mean that inference using Markov chain Monte Carlo (MCMC) may be computationally challenging in this setting. We use both MCMC and Integrated Nested Laplace Approximations (INLA) to implement spatio-temporal exposure models when dealing with high-dimensional data. We also propose an approach for utilising the results from exposure models in health models, which allows them to enhance studies of the health effects of air pollution. Moreover, we investigate the possible effects of preferential sampling, where monitoring sites in environmental networks are preferentially located by the designers in order to assess whether guidelines and policies are being adhered to. This means the data arising from such networks may not accurately characterise the spatio-temporal field they are intended to monitor, and as such will not provide accurate estimates of the exposures potentially experienced by populations. This has the potential to introduce bias into estimates of risk associated with exposure to air pollution and subsequent health impact analyses. Throughout the thesis, the methods developed are assessed using simulation studies and applied to real-life case studies assessing the effects of particulate matter on health in Greater London and throughout the UK.
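
The preferential-sampling effect described here can be illustrated with a small simulation, sketched below under entirely hypothetical assumptions (a one-dimensional "pollution field" and monitors placed with probability increasing in the field value):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 1-D "pollution field": smooth signal plus noise over a transect.
x = np.linspace(0, 10, 1000)
field = 20 + 5 * np.sin(x) + rng.normal(0, 1, x.size)

# Non-preferential design: monitors placed uniformly at random.
uniform_sites = rng.choice(x.size, 30, replace=False)

# Preferential design: monitors placed where concentrations are high
# (e.g. to check adherence to guideline limits).
p = np.exp(0.5 * field)
pref_sites = rng.choice(x.size, 30, replace=False, p=p / p.sum())

print("true field mean:          ", field.mean().round(2))
print("uniform-network mean:     ", field[uniform_sites].mean().round(2))
print("preferential-network mean:", field[pref_sites].mean().round(2))
# The preferential network overestimates population-level exposure,
# which is the kind of bias the thesis investigates.
```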
45

Bayesian Analysis of Crime Survey Data with Nonresponse

Liu, Shiao 26 April 2018 (has links)
Bayesian hierarchical models are effective tools for small area estimation because they pool small datasets together. The pooling allows individual areas to “borrow strength” from each other, improving the estimation. This work extends Nandram and Choi (2002), NC, to perform inference on finite population proportions when there is non-identifiability of the missing-data pattern for nonresponse in binary survey data. We review the small-area selection model (SSM) in NC, which is able to incorporate the non-identifiability. Moreover, the proposed SSM, together with the individual-area selection model (ISM) and the small-area pattern-mixture model (SPM), is evaluated using real crime data from Stasny (1991). Furthermore, the methodology is compared to the ISM and SPM using simulated small-area datasets. Computational issues related to the MCMC are also discussed.
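
The "borrowing strength" idea can be sketched with a generic beta-binomial shrinkage example — a simplification, not the selection models (SSM/ISM/SPM) studied in the thesis; all counts are invented:

```python
import numpy as np

# Hypothetical small-area crime survey counts.
n = np.array([12, 40, 8, 25, 60])   # respondents per area
y = np.array([3, 22, 1, 9, 31])     # victimisation reports per area

p_hat = y / n                       # raw area-level proportions
p_pool = y.sum() / n.sum()          # pooled proportion across areas

# Moment-matched Beta(a, b) prior shared by all areas (empirical Bayes):
v = p_hat.var()
m = p_pool
k = max(m * (1 - m) / v - 1, 1e-6)  # prior "sample size"
a, b = m * k, (1 - m) * k

# Posterior means shrink small areas toward the pooled estimate;
# areas with little data (small n) are shrunk the most.
p_shrunk = (y + a) / (n + a + b)
for i in range(len(n)):
    print(f"area {i}: raw {p_hat[i]:.2f} -> shrunk {p_shrunk[i]:.2f}")
```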
46

Effect fusion using model-based clustering

Malsiner-Walli, Gertraud, Pauger, Daniela, Wagner, Helga 01 April 2018
In social and economic studies many of the collected variables are measured on a nominal scale, often with a large number of categories. The definition of categories can be ambiguous, and different classification schemes using either a finer or a coarser grid are possible. Categorization has an impact when such a variable is included as a covariate in a regression model: too fine a grid will result in imprecise estimates of the corresponding effects, whereas with too coarse a grid important effects will be missed, resulting in biased effect estimates and poor predictive performance. To achieve an automatic grouping of the levels of a categorical covariate with essentially the same effect, we adopt a Bayesian approach and specify the prior on the level effects as a location mixture of spiky Normal components. Model-based clustering of the effects during MCMC sampling makes it possible to simultaneously detect categories which have essentially the same effect size and to identify variables with no effect at all. Fusion of level effects is induced by a prior on the mixture weights which encourages empty components. The properties of this approach are investigated in simulation studies. Finally, the method is applied to analyse the effects of high-dimensional categorical predictors on income in Austria.
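
A minimal sketch of the clustering step implied by this prior: given hypothetical level-effect values and fixed component locations, one classification draw (as would occur inside an MCMC sweep) assigns levels to spiky Normal components, fusing levels that share a component. All numbers below are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical estimated effects for 8 levels of a categorical covariate;
# levels 0-2 share one effect, 3-5 another, and 6-7 have no effect.
beta = np.array([1.02, 0.97, 1.01, -0.49, -0.52, -0.48, 0.01, -0.02])

# Spiky Normal mixture prior: component locations mu_k with a very small
# within-component variance, so effects assigned to one component "fuse".
mu = np.array([1.0, -0.5, 0.0])   # component locations (0 = no effect)
tau2 = 0.01 ** 2                  # spiky: tiny component variance
w = np.array([1 / 3, 1 / 3, 1 / 3])

# One classification step: posterior probability of each level effect
# belonging to each component, then a categorical draw.
logp = np.log(w) - 0.5 * (beta[:, None] - mu[None, :]) ** 2 / tau2
p = np.exp(logp - logp.max(axis=1, keepdims=True))
p /= p.sum(axis=1, keepdims=True)
z = np.array([rng.choice(3, p=pi) for pi in p])
print("cluster assignments:", z)  # equal labels => fused levels
```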
47

Deterioration model for ports in the Republic of Korea using Markov chain Monte Carlo with multiple imputation

Jeon, Juncheol January 2019
The condition of infrastructure deteriorates as it ages, and a deterioration model predicts how and when facilities will deteriorate over time. In most infrastructure management systems, the deterioration model is a crucial element. Using a deterioration model, it is possible to estimate when repairs will be needed, how much the maintenance of the entire stock of facilities will cost, and what maintenance costs will be incurred over a facility's life cycle. However, the study of deterioration models for the civil infrastructure of ports is still in its infancy, and there is almost no related research in South Korea. This study therefore aims to develop a deterioration model for the civil infrastructure of ports in South Korea. There are various approaches to developing deterioration models, including deterministic, stochastic, and artificial-intelligence methods. In this research, a Markov model based on Markov chain theory, one of the stochastic methods, is used. A Markov chain is a probabilistic process over states: transitions between states occur with probabilities called transition probabilities. The key step in developing a Markov model is to find these transition probabilities; this process is called calibration. In this study, the existing calibration methods, optimization and Markov chain Monte Carlo (MCMC), are reviewed, and improvements to them are presented. In addition, only a small amount of data is available in this study, which can distort the model; supplementary techniques are therefore presented to compensate for the small dataset. To address the shortcomings of the existing methods and the lack of data, deterioration models developed with four calibration methods are proposed: optimization, optimization with bootstrap, MCMC, and MCMC with multiple imputation. The four models are compared and the best-performing model is identified. This research thus provides a deterioration model for ports in South Korea, suggests a more accurate calibration technique, and combines methods for supplementing insufficient data with existing calibration techniques.
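
A minimal sketch of the Markov deterioration mechanism follows, with an invented four-state annual transition matrix; calibration, which the thesis addresses with optimization and MCMC variants, amounts to estimating these matrix entries from inspection data:

```python
import numpy as np

# Condition states 1 (best) .. 4 (worst); illustrative annual transition
# matrix (rows sum to 1; deterioration only, no repair).
P = np.array([
    [0.90, 0.10, 0.00, 0.00],
    [0.00, 0.85, 0.15, 0.00],
    [0.00, 0.00, 0.80, 0.20],
    [0.00, 0.00, 0.00, 1.00],
])

state = np.array([1.0, 0.0, 0.0, 0.0])  # new facility: all in state 1
for year in range(0, 31, 5):
    expected = state @ np.array([1, 2, 3, 4])  # expected condition index
    print(f"year {year:2d}: distribution {state.round(3)}, "
          f"expected condition {expected:.2f}")
    state = state @ np.linalg.matrix_power(P, 5)  # advance 5 years
```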
48

Peptide Identification: Refining a Bayesian Stochastic Model

Acquah, Theophilus Barnabas Kobina 01 May 2017 (has links)
Notwithstanding the challenges associated with different methods of peptide identification, many methods have been explored over the years. The complexity, size, and computational challenges of peptide-based data sets call for further work in this area. By relying on prior information about the average relative abundances of bond cleavages and the prior probability of any specific amino acid sequence, we refine an already developed Bayesian approach to identifying peptides. The likelihood function is improved by adding additional ions to the model, and its size is driven by two overall goodness-of-fit measures. Given the complexity of the posterior density, a Markov chain Monte Carlo algorithm coupled with simulated annealing is used to simulate candidate choices from the posterior distribution of the peptide sequence, and the peptide with the largest posterior density is estimated as the true peptide.
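
The following sketch shows simulated annealing over a discrete sequence space in the same spirit, using a toy scoring function in place of the peptide posterior; the alphabet, target, and cooling schedule are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

AA = list("ACDEFGHIKLMNPQRSTVWY")  # 20 amino acids
target = list("PEPTIDE")            # hypothetical "true" peptide

def log_score(seq):
    # Toy stand-in for the log posterior of a sequence given a spectrum:
    # counts positions matching the hypothetical target.
    return sum(a == b for a, b in zip(seq, target))

seq = [rng.choice(AA) for _ in target]      # random initial sequence
best, best_lp = seq[:], log_score(seq)
for step in range(5000):
    T = max(0.05, 1.0 * 0.999 ** step)      # annealing temperature
    prop = seq[:]
    prop[rng.integers(len(prop))] = rng.choice(AA)  # mutate one residue
    d = log_score(prop) - log_score(seq)
    if d >= 0 or rng.uniform() < np.exp(d / T):     # Metropolis at temp T
        seq = prop
    if log_score(seq) > best_lp:
        best, best_lp = seq[:], log_score(seq)
print("".join(best), best_lp)  # sequence with largest score found
```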
49

Water Budget Analysis and Groundwater Inverse Modeling

Farid Marandi, Sayena May 2012
The thesis contains two studies: the first is a water budget analysis using groundwater modeling, and the second is groundwater inverse modeling using an MCMC scheme. The case study for the water budget analysis was the Norman Landfill site in Oklahoma, which has quite complex hydrology. The site contains a wetland that controls the groundwater-surface water interaction. The first study reports a simulation, built with MODFLOW-2000, for better understanding the local water balance at the landfill site. Inputs to the model are based on local climate, soil, geology, vegetation, and the seasonal hydrological dynamics of the system, and are used to determine the groundwater-surface water interaction, the water balance components in various hydrologic reservoirs, and the complexity and seasonality of local and regional hydrological processes. The model involved a transient two-dimensional hydrogeological simulation of the multi-layered aquifer. In the second part of the thesis, a Markov chain Monte Carlo (MCMC) method was developed to estimate the hydraulic conductivity field conditioned on measurements of hydraulic conductivity and hydraulic head for saturated flow in randomly heterogeneous porous media. The groundwater modeling approach was found to be efficient in identifying the dominant hydrological processes at the Norman Landfill site, including evapotranspiration, recharge, regional groundwater flow, and groundwater-surface water interaction. The MCMC scheme also proved to be a robust tool for inverse groundwater modeling, although its strength depends on the precision of the prior covariance matrix.
50

Conjoint Analysis Using Mixed Effect Models

Frühwirth-Schnatter, Sylvia, Otter, Thomas January 1999
Following the pioneering work of Allenby and Ginter (1995) and Lenk et al. (1994), we propose in Section 2 a mixed effect model allowing for fixed and random effects as a possible statistical solution to the problems mentioned above. Parameter estimation using a new, efficient variant of a Markov chain Monte Carlo method is discussed in Section 3, together with problems of model comparison techniques in the context of random effect models. Section 4 presents an application of the former to a brand-price trade-off study from the Austrian mineral water market. (author's abstract) / Series: Forschungsberichte / Institut für Statistik
