151 |
Utilisation de simulateurs multi-fidélité pour les études d'incertitudes dans les codes de calcul / Assessment of uncertainty in computer experiments when working with multifidelity simulators. Zertuche, Federico 08 October 2015 (has links)
Computer simulations are a very important tool for applied mathematicians and engineers. They have become increasingly precise but also more complicated: so much so that producing a single output is very slow, the simulations are difficult to sample, and many of their aspects are not well understood. For example, in many cases they depend on parameters whose value is unknown.
A metamodel is a reconstruction of the simulation. It requires much less time to produce an output close to what the simulation would produce, and with it some aspects of the original simulation can be studied. It is built from very few samples, and its purpose is to replace the simulation.
This thesis is concerned with the construction of a metamodel in a particular context called multi-fidelity. In multi-fidelity, the metamodel is constructed using data from the target simulation along with related, approximate samples. These approximate samples can come from a degraded version of the simulation, from an old version that has been studied extensively, or from another simulation in which part of the description is simplified.
By learning the difference between the samples, it is possible to incorporate the information carried by the approximate data, which may lead to an enhanced metamodel. Two approaches that do this are studied in this manuscript: one based on Gaussian process modeling and one based on a coarse-to-fine wavelet decomposition. The first method shows how, by estimating the relationship between two data sets, it is possible to incorporate data that would otherwise be useless. The second method proposes an adaptive procedure for systematically adding data to enhance the metamodel.
The object of this work is to improve our understanding of how to incorporate approximate data to enhance a metamodel. Working with a multi-fidelity metamodel helps us understand in detail the data that nourish it. In the end, a global picture of the elements that compose it takes shape: the relationships and differences between all the data sets become clearer.
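The discrepancy idea at the heart of the first approach can be sketched in a few lines. The following is a minimal illustration, not the thesis's actual estimator: it assumes scikit-learn, and the toy simulators f_low and f_high are hypothetical stand-ins for the approximate and target simulations.

```python
# Minimal sketch of multi-fidelity metamodeling by learning the
# difference between a cheap and an expensive simulator.
# Assumes scikit-learn; f_low/f_high are hypothetical toy stand-ins.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def f_high(x):            # expensive "target" simulation (toy stand-in)
    return np.sin(8 * x) * x

def f_low(x):             # degraded/approximate simulation (toy stand-in)
    return 0.8 * np.sin(8 * x) * x + 0.2 * x

x_low = np.linspace(0, 1, 40).reshape(-1, 1)    # many cheap runs
x_high = np.linspace(0, 1, 6).reshape(-1, 1)    # few expensive runs

# Step 1: fit a GP to the abundant low-fidelity data.
gp_low = GaussianProcessRegressor(kernel=RBF(0.1)).fit(x_low, f_low(x_low).ravel())

# Step 2: fit a second GP to the discrepancy at the few high-fidelity points.
delta = f_high(x_high).ravel() - gp_low.predict(x_high)
gp_delta = GaussianProcessRegressor(kernel=RBF(0.2)).fit(x_high, delta)

# Metamodel = low-fidelity trend + learned correction.
x_new = np.linspace(0, 1, 200).reshape(-1, 1)
y_hat = gp_low.predict(x_new) + gp_delta.predict(x_new)
```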
|
152 |
Metodologias de inserção de dados sob mecanismo de falta MNAR para modelagem de teores em depósitos multivariados heterotópicos. Silva, Camilla Zacché da January 2018 (has links)
When modeling mineral deposits, it is common to face the problem of estimating multiple, possibly correlated attributes, where some variables are sampled less densely than others. Missing data pose a problem that requires attention before any subsequent modeling; ultimately, we need models that are statistically representative. Most practical data sets are heterotopically sampled, and to obtain coherent results one must understand why some data are missing and which mechanisms caused the absence of information. The theory of missing data relates the missing samples to the measured ones through three distinct mechanisms: Missing Completely At Random (MCAR), Missing At Random (MAR), and Missing Not At Random (MNAR). The last mechanism is extremely complex, and the literature recommends first treating it as a MAR mechanism and then applying a fixed transform to the imputed values so that they become MNAR values. Although classical statistical methods exist for dealing with missing data, such approaches ignore spatial correlation, a feature that occurs naturally in geological data. An adequate methodology for dealing with missing geological data is Bayesian Updating, which imputes values under a MAR mechanism while accounting for spatial correlation. In the present study, Bayesian Updating was combined with fixed transforms to treat the MNAR missing-data mechanism in geological data.
The fixed transform employed here is based on the imputation error generated in a MAR scenario on the data set. The resulting complete data set was then used in a sequential Gaussian simulation of the grades of a multivariate data set, yielding satisfactory results, superior to those obtained through sequential Gaussian co-simulation, without introducing any bias into the final model.
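The two-step strategy (impute as if MAR, then apply a fixed transform) can be illustrated with a small sketch. The regression imputer and the transform coefficients below are illustrative assumptions and stand in for, rather than reproduce, the thesis's Bayesian Updating workflow.

```python
# Sketch of the two-step MNAR strategy: impute the missing values as if
# they were MAR, then apply a fixed transform so the imputed values mimic
# an MNAR mechanism. The regression imputer and the coefficients (a, b)
# are illustrative assumptions, not the thesis's Bayesian Updating.
import numpy as np

rng = np.random.default_rng(0)
n = 500
primary = rng.normal(2.0, 0.5, n)                      # densely sampled grade
secondary = 0.7 * primary + rng.normal(0.0, 0.2, n)    # sparsely sampled grade

# Heterotopic sampling: secondary is missing at 60% of the locations.
missing = rng.random(n) < 0.6
y = np.where(missing, np.nan, secondary)

# Step 1 (MAR-style imputation): regress secondary on primary where both
# are observed, predict at the missing locations, and add noise so the
# imputed values reproduce the conditional variance.
obs = ~missing
slope, intercept = np.polyfit(primary[obs], y[obs], 1)
resid_sd = np.std(y[obs] - (slope * primary[obs] + intercept))
y_mar = slope * primary[missing] + intercept + rng.normal(0, resid_sd, missing.sum())

# Step 2 (fixed transform): shift/scale the imputed values to reflect the
# assumed MNAR mechanism, e.g. low grades were preferentially unsampled.
a, b = 1.0, -0.1                 # hypothetical transform coefficients
y_imputed = y.copy()
y_imputed[missing] = a * y_mar + b
```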
|
153 |
Os Inteiros Gaussianos via Matrizes. Barbosa, Fabrício de Paula Farias 23 October 2015 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES / Our study aims to present a special category of numbers, the Gaussian integers, together with their properties and operations, and to give an overview of these numbers, their history, and their emergence. We also study Gaussian primes, their properties, and their representation in the language of 2 x 2 matrices.
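The representation in question is the classical one sending a + bi to the 2 x 2 matrix [[a, -b], [b, a]], under which addition and multiplication of Gaussian integers become matrix addition and multiplication, and the norm N(a + bi) = a² + b² becomes the determinant. A short self-check of these facts:

```python
# The classical 2 x 2 matrix representation of a Gaussian integer a + bi:
#     a + bi  <->  [[a, -b],
#                   [b,  a]]
# Addition and multiplication of Gaussian integers correspond to matrix
# addition and multiplication, and the norm is the determinant.
import numpy as np

def rep(a, b):
    """Matrix representing the Gaussian integer a + bi."""
    return np.array([[a, -b], [b, a]])

z, w = rep(1, 2), rep(3, 4)          # 1 + 2i and 3 + 4i
prod = z @ w                         # (1+2i)(3+4i) = -5 + 10i
assert np.array_equal(prod, rep(-5, 10))
assert round(np.linalg.det(rep(1, 2))) == 1**2 + 2**2   # norm = determinant
```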
|
155 |
Apprentissage de graphes structuré et parcimonieux dans des données de haute dimension avec applications à l'imagerie cérébrale / Structured Sparse Learning on Graphs in High-Dimensional Data with Applications to NeuroImaging. Belilovsky, Eugene 02 March 2018 (has links)
This dissertation presents novel structured sparse learning methods on graphs that address commonly encountered problems in the analysis of neuroimaging data, as well as other high-dimensional data with few samples. The first part of the thesis proposes convex relaxations of discrete and combinatorial penalties involving sparsity and bounded total variation on a graph, as well as a bounded ℓ2 norm. These are developed with the aim of learning an interpretable predictive linear model, and we demonstrate their effectiveness on neuroimaging data as well as on a sparse image recovery problem. The subsequent parts of the thesis consider structure discovery of undirected graphical models from few observational data. In particular, we focus on invoking sparsity and other structural assumptions in Gaussian Graphical Models (GGMs). To this end we make two contributions. First, we show an approach to identify differences between GGMs known to have similar structure: we derive the distribution of parameter differences under a joint penalty when the parameters are known to be sparse in the difference, and we show how this can be used to obtain confidence intervals on edge differences in GGMs. Second, we introduce a novel learning-based approach to the problem of structure discovery of undirected graphical models from observational data, and we demonstrate how neural networks can be used to learn effective estimators for this problem. This is empirically shown to be a flexible and efficient alternative to existing techniques.
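For contrast with the joint-penalty estimator developed in the thesis, a naive baseline for spotting edge differences between two GGMs is to fit a sparse precision matrix to each sample set separately and compare the entries. A minimal sketch, assuming scikit-learn (note this baseline yields no confidence intervals):

```python
# Naive baseline for spotting edge differences between two Gaussian
# Graphical Models: fit a sparse precision matrix to each sample set and
# compare entries. Shown only as a contrast to the thesis's joint-penalty
# approach, which additionally yields confidence intervals on differences.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
p, n = 5, 400

# Two precision matrices sharing most structure, differing in one edge.
theta1 = np.eye(p) + 0.3 * (np.eye(p, k=1) + np.eye(p, k=-1))
theta2 = theta1.copy()
theta2[0, 2] = theta2[2, 0] = 0.25    # extra edge present only in model 2

x1 = rng.multivariate_normal(np.zeros(p), np.linalg.inv(theta1), n)
x2 = rng.multivariate_normal(np.zeros(p), np.linalg.inv(theta2), n)

prec1 = GraphicalLasso(alpha=0.05).fit(x1).precision_
prec2 = GraphicalLasso(alpha=0.05).fit(x2).precision_

# Edges whose precision entries differ noticeably between the two fits.
diff_edges = np.argwhere(np.abs(prec1 - prec2) > 0.1)
```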
|
156 |
Modeling The Output From Computer Experiments Having Quantitative And Qualitative Input Variables And Its Applications. Han, Gang 10 December 2008 (has links)
No description available.
|
157 |
Fundamental Limits of Communication Channels under Non-Gaussian Interference. Le, Anh Duc 04 October 2016 (has links)
No description available.
|
158 |
Risk aggregation and capital allocation using copulas / Martinette Venter. Venter, Martinette January 2014 (has links)
Banking is a risk and return business; in order to obtain the desired returns, banks are required to take on risks. Following the demise of Lehman Brothers in September 2008, the Basel III Accord proposed considerable increases in capital charges for banks. Whilst this ensures greater economic stability, banks now face an increasing risk of becoming capital inefficient. Furthermore, capital analysts are required to estimate capital requirements not only for individual business lines but also for the organization as a whole. Copulas are a popular technique for modeling joint multi-dimensional problems, as they can be applied as a mechanism that models relationships among multivariate distributions. Firstly, a review of the Basel Capital Accord will be provided. Secondly, well-known risk measures as proposed under the Basel Accord will be investigated. The penultimate chapter is dedicated to the theory of copulas as well as other measures of dependence. The final chapter presents a practical illustration of how business line losses can be simulated using the Gaussian, Cauchy, Student t and Clayton copulas in order to determine capital requirements using 95% VaR, 99% VaR, 95% ETL, 99% ETL and StressVaR. The resultant capital estimates will always be a function of the choice of copula, the choice of risk measure and the correlation inputs into the copula calibration algorithm. The choice of copula, the choice of risk measure and the conservativeness of correlation inputs will be determined by the organization's risk appetite. / MSc (Applied Mathematics), North-West University, Potchefstroom Campus, 2014
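The final chapter's workflow (copula-driven loss simulation followed by tail risk measures) can be sketched as follows for the Gaussian copula case. The lognormal marginals and the 0.6 correlation are illustrative assumptions, not calibrated inputs from the dissertation:

```python
# Minimal sketch: simulate two correlated business-line losses through a
# Gaussian copula, then read off 99% VaR and 99% ETL for the aggregate.
# The lognormal marginals and the 0.6 correlation are illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100_000
corr = np.array([[1.0, 0.6], [0.6, 1.0]])

# Gaussian copula: correlated normals -> uniforms via the normal CDF.
z = rng.multivariate_normal([0.0, 0.0], corr, size=n)
u = stats.norm.cdf(z)

# Map uniforms through each business line's marginal loss distribution.
loss1 = stats.lognorm.ppf(u[:, 0], s=0.8, scale=1.0e6)
loss2 = stats.lognorm.ppf(u[:, 1], s=1.1, scale=5.0e5)
total = loss1 + loss2

var99 = np.quantile(total, 0.99)            # 99% Value-at-Risk
etl99 = total[total >= var99].mean()        # 99% Expected Tail Loss
```

Swapping in a Student t or Clayton copula changes only the dependence step; the marginal mapping and the risk measures stay the same, which is why the resulting capital estimate is a function of the copula choice.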
|
160 |
Efficient feature detection using OBAloG: optimized box approximation of Laplacian of Gaussian. Jakkula, Vinayak Reddy January 1900 (has links)
Master of Science / Department of Electrical and Computer Engineering / Christopher L. Lewis / This thesis presents a novel approach for detecting robust and scale-invariant interest points in images. The detector accurately and efficiently approximates the Laplacian of Gaussian using an optimal set of weighted box filters that take advantage of integral images to reduce computations. When combined with state-of-the-art descriptors for matching, the algorithm performs better than leading feature tracking algorithms, including SIFT and SURF, in terms of speed and accuracy.
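The integral-image mechanism that makes the box approximation cheap is standard: after one cumulative-sum pass, any axis-aligned box sum costs four array lookups regardless of box size. A minimal sketch of that mechanism (the optimized OBAloG weights themselves are derived in the thesis and not reproduced here):

```python
# Integral-image trick behind cheap box filters: after one cumulative-sum
# pass, any axis-aligned box sum costs four lookups, regardless of size.
# The optimized OBAloG weights from the thesis are not reproduced here.
import numpy as np

def integral_image(img):
    """Zero-padded summed-area table: ii[i, j] = sum of img[:i, :j]."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def box_sum(ii, top, left, bottom, right):
    """Sum of img[top:bottom, left:right] in four lookups."""
    return (ii[bottom, right] - ii[top, right]
            - ii[bottom, left] + ii[top, left])

img = np.arange(25, dtype=float).reshape(5, 5)
ii = integral_image(img)
assert box_sum(ii, 1, 1, 4, 4) == img[1:4, 1:4].sum()
# An LoG-like response is then a weighted combination of a few such boxes.
```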
|