471 |
Statistical Analysis and Bayesian Methods for Fatigue Life Prediction and Inverse Problems in Linear Time Dependent PDEs with Uncertainties
Sawlan, Zaid A, 10 November 2018 (has links)
This work employs statistical and Bayesian techniques to analyze mathematical forward models with several sources of uncertainty. The forward models usually arise from phenomenological and physical phenomena and are expressed through regression-based models or partial differential equations (PDEs) associated with uncertain parameters and input data. One of the critical challenges in real-world applications is to quantify the uncertainties of the unknown parameters using observations. To this end, methods based on the likelihood function and Bayesian techniques constitute the two main statistical inference approaches considered here.
Two problems are studied in this thesis. The first is the prediction of the fatigue life of metallic specimens; the second concerns inverse problems in linear PDEs. Both problems require the inference of unknown parameters from measurements. We first estimate the parameters by means of the maximum likelihood approach. Next, we seek a more comprehensive Bayesian inference using analytical asymptotic approximations or computational techniques.
For fatigue life prediction, there are several plausible probabilistic stress-lifetime (S-N) models. These models are calibrated against uniaxial fatigue experiments. To generate accurate fatigue life predictions, competing S-N models are ranked according to several classical information-based measures, and a different set of predictive information criteria is then used to compare the candidate Bayesian models. Moreover, we propose a spatial stochastic model that generalizes S-N models to fatigue crack initiation in general geometries. The model is based on a spatial Poisson process with an intensity function that combines the S-N curves with an averaged effective stress computed from the solution of the linear elasticity equations.
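The likelihood-based calibration step described above can be illustrated with a minimal sketch. The following Python example is not taken from the thesis; the stress levels, lifetimes, and starting values are invented. It fits a Basquin-type probabilistic S-N model with lognormal life scatter by maximum likelihood and evaluates a survival probability under the fitted model:

```python
# Illustrative sketch only: maximum-likelihood calibration of a simple probabilistic
# Basquin-type S-N model, log10(N) = a + b*log10(S) + eps, eps ~ Normal(0, sigma^2).
# The stress levels and lifetimes below are made up for the example.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

S = np.array([300., 300., 250., 250., 200., 200., 150., 150.])               # stress amplitude (MPa)
N = np.array([2.1e4, 3.0e4, 8.5e4, 1.2e5, 4.0e5, 6.1e5, 2.3e6, 3.8e6])       # cycles to failure

def neg_log_lik(theta):
    a, b, log_sigma = theta
    sigma = np.exp(log_sigma)                  # keep the scatter parameter positive
    mu = a + b * np.log10(S)                   # median log-life at each stress level
    return -np.sum(norm.logpdf(np.log10(N), mu, sigma))

mle = minimize(neg_log_lik, x0=np.array([10.0, -2.0, -1.0]), method="Nelder-Mead")
a_hat, b_hat, sigma_hat = mle.x[0], mle.x[1], np.exp(mle.x[2])
print(f"a = {a_hat:.2f}, b = {b_hat:.2f}, sigma = {sigma_hat:.3f}")

# Probability that a specimen at 180 MPa survives 1e6 cycles under the fitted model
p_surv = norm.sf(np.log10(1e6), a_hat + b_hat * np.log10(180.0), sigma_hat)
print(f"P(N > 1e6 cycles at 180 MPa) = {p_surv:.3f}")
```

The same likelihood could equally be placed inside an MCMC sampler to obtain the Bayesian posterior mentioned in the abstract; only the optimization step would change.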
|
472 |
[en] ARTIFICIAL NEURAL NETWORK MODELING FOR QUALITY INFERENCE OF A POLYMERIZATION PROCESS / [pt] MODELO DE REDES NEURAIS ARTIFICIAIS PARA INFERÊNCIA DA QUALIDADE DE UM PROCESSO POLIMÉRICO
JULIA LIMA FLECK, 26 January 2009 (has links)
[en] This work comprises the development of a neural network-based model for quality inference of low density polyethylene (LDPE). Plant data corresponding to the process variables of a petrochemical company's LDPE reactor were used for model development. The data were preprocessed in the following manner: first, the most relevant process variables were selected, then the data were cleaned and normalized and the training patterns were prepared. The neural network-based model was able to accurately predict the value of the polymer melt index as a function of the process variables. The model's performance was compared with that of two mechanistic models developed from first principles, using the models' mean absolute percentage error calculated with respect to experimental values of the melt index. The results obtained confirm the neural network model's ability to infer values of quality-related measurements of the LDPE reactor, making it an efficient tool for modelling the quality of the LDPE production process.
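As a rough illustration of this kind of inference task, the sketch below trains a small multilayer perceptron to predict a melt-index-like quality variable from process variables and scores it with the mean absolute percentage error. All data, layer sizes, and variable names are synthetic stand-ins, not the plant data or the architecture used in the thesis:

```python
# Illustrative sketch only: an MLP that infers a melt-index-like quality variable
# from a few reactor process variables, scored by mean absolute percentage error.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 4))                                   # stand-ins for temperature, pressure, etc.
y = 5.0 + 0.8 * X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.normal(size=n)   # synthetic "melt index"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
scaler = StandardScaler().fit(X_tr)                           # normalize inputs, as in the preprocessing step
model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
model.fit(scaler.transform(X_tr), y_tr)

pred = model.predict(scaler.transform(X_te))
mape = 100.0 * np.mean(np.abs((y_te - pred) / y_te))          # mean absolute percentage error
print(f"MAPE on held-out data: {mape:.1f}%")
```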
|
473 |
Understanding cellular differentiation by modelling of single-cell gene expression data
Papadopoulos, Nikolaos, 08 August 2019 (has links)
No description available.
|
474 |
Essays on bivariate option pricing via copula and heteroscedasticity models: a classical and Bayesian approach / Ensaios sobre precificação de opções bivariadas via cópulas e modelos heterocedásticos: abordagem clássica e bayesiana
Lopes, Lucas Pereira, 15 February 2019 (has links)
This dissertation is composed of two main essays that are independent but complementary. In the first, we discuss option pricing from a Bayesian perspective. This essay aims to price and analyze the fair-price behavior of the (bivariate) call-on-max option, considering marginal heteroscedastic models with the dependence structure modeled via copulas. Concerning inference, we adopt a Bayesian perspective and computationally intensive methods based on Markov chain Monte Carlo (MCMC) simulation. A simulation study examines the bias and the root mean squared errors of the posterior means for the parameters. Real stock prices of Brazilian banks illustrate the approach. For the proposed method, we verify the effects of the strike and of the dependence structure on the fair price of the option. The results show that the prices obtained by our heteroscedastic-copula approach differ substantially from the prices obtained by the model derived from Black and Scholes. Empirical results are presented to demonstrate the advantages of our strategy. In the second chapter, we consider GARCH-in-mean models with asymmetric variance specifications to model the volatility of the underlying assets under the risk-neutral dynamics. Moreover, copula functions model the joint distribution, with the objective of capturing linear, non-linear, and tail associations between the assets. We aim to provide a methodology for more realistic option pricing. To illustrate the methodology, we use stocks of two Brazilian companies, for which our modeling offered a proper fit. Comparing the results with those of the classical model, an extension of the Black and Scholes model, we note that assuming constant volatility over time underprices the options, especially in-the-money options.
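For readers unfamiliar with the payoff being priced, the following minimal sketch (not the dissertation's model) prices a call-on-max option by Monte Carlo, injecting dependence through a Gaussian copula. The GARCH-in-mean marginals of the essays are replaced here by constant-volatility lognormal marginals purely to keep the example short, and all numbers are invented:

```python
# Minimal sketch: Monte Carlo pricing of a European call-on-max on two assets, with
# dependence injected through a Gaussian copula on the terminal log-returns.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
S0 = np.array([20.0, 35.0])        # spot prices (made-up values)
sigma = np.array([0.30, 0.45])     # annualized volatilities (made-up values)
r, T, K, rho, n_sims = 0.05, 0.5, 30.0, 0.6, 200_000

# Gaussian copula: correlated uniforms from a bivariate standard normal
L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
Z = rng.standard_normal((n_sims, 2)) @ L.T
U = norm.cdf(Z)

# Map the uniforms through each lognormal marginal (risk-neutral drift r)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * norm.ppf(U))

payoff = np.maximum(ST.max(axis=1) - K, 0.0)        # call-on-max payoff
price = np.exp(-r * T) * payoff.mean()
stderr = np.exp(-r * T) * payoff.std(ddof=1) / np.sqrt(n_sims)
print(f"call-on-max price ~ {price:.3f} (MC s.e. {stderr:.3f})")
```

Replacing the constant-volatility marginals with GARCH-type simulated paths, and the Gaussian copula with another copula family, recovers the structure of the approach described in the abstract.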
|
475 |
Avaliação da Sustentabilidade nas Universidades: uma proposta por meio da teoria dos conjuntos fuzzy
Piacitelli, Leni Palmira, January 2019 (has links)
Advisor: Sandra Regina Monteiro Masalskiene Roveda / Abstract: The new perspective toward conservation of the environment as a categorical requirement for planetary subsistence has placed sustainability in the foreground as the great challenge of the university, which is responsible and equipped for the education of those who will hold decision-making power over questions related to a viable future. This study addresses sustainability in the university through what is perceived by the various actors who move through it. Its objective was to uncover, in selected public- and private-sector institutions, the impressions that professors/coordinators, students, and staff hold about the institution's actions on its campus, the sustainability-oriented projects and research developed by the teaching staff, and the effective learning in the education of new professionals who will work in the various areas of activity in our society. To measure these impressions, questionnaires were applied and a fuzzy model with an associated index was developed, which presents the sustainability level of a Higher Education Institution (HEI). This leads us to conclude that fuzzy inference systems are capable of assessing what the university community perceives about the sustainability of its institution. / Doctorate
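A fuzzy inference step of the kind mentioned in the abstract can be sketched in a few lines. The example below is only an illustration: the membership functions, rule base, and questionnaire scores are invented, and the thesis's actual model and index are more elaborate:

```python
# Illustrative sketch only: a tiny Mamdani-style fuzzy inference step that maps two
# questionnaire-derived scores to a sustainability index via triangular/linear
# membership functions and centroid defuzzification.
import numpy as np

def tri(y, a, b, c):
    """Triangular membership function peaking at b on support [a, c]."""
    return np.maximum(np.minimum((y - a) / (b - a), (c - y) / (c - b)), 0.0)

# Hypothetical questionnaire scores (0-10): campus practices and curriculum content
campus, curriculum = 7.5, 4.0

# Fuzzification: degree to which each score is "low" or "high"
low_campus, high_campus = 1 - campus / 10, campus / 10
low_curr, high_curr = 1 - curriculum / 10, curriculum / 10

# Rule base (min = AND, max = OR): firing strength of each output category
w_low = min(low_campus, low_curr)                                      # both low   -> low index
w_med = max(min(high_campus, low_curr), min(low_campus, high_curr))    # mixed      -> medium index
w_high = min(high_campus, high_curr)                                   # both high  -> high index

# Output membership functions on the sustainability-index universe [0, 10]
y = np.linspace(0, 10, 1001)
low_mf = np.clip((5 - y) / 5, 0, 1)
med_mf = tri(y, 0, 5, 10)
high_mf = np.clip((y - 5) / 5, 0, 1)

# Mamdani aggregation (clip each consequent at its rule strength), then centroid
agg = np.maximum.reduce([np.minimum(w_low, low_mf),
                         np.minimum(w_med, med_mf),
                         np.minimum(w_high, high_mf)])
index = np.sum(y * agg) / np.sum(agg)
print(f"fuzzy sustainability index: {index:.2f} / 10")
```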
|
476 |
Dismembering the Multi-Armed Bandit
Timothy J Keaton (6991049), 14 August 2019 (has links)
The multi-armed bandit (MAB) problem refers to the task of sequentially assigning treatments to experimental units so as to identify the best treatment(s) while controlling the opportunity cost of further investigation. Many algorithms have been developed that attempt to balance this trade-off between exploiting the seemingly optimum treatment and exploring the other treatments. The selection of an MAB algorithm for implementation in a particular context is often performed by comparing candidate algorithms in terms of their abilities to control the expected regret of exploration versus exploitation. This singular criterion of mean regret is insufficient for many practical problems, and therefore an additional criterion that should be considered is control of the variance, or risk, of regret.
This work provides an overview of how the existing prominent MAB algorithms handle both criteria. We additionally investigate the effects of incorporating prior information into an algorithm's model, including how sharing information across treatments affects the mean and variance of regret.
A unified and accessible framework does not currently exist for constructing MAB algorithms that control both of these criteria. To this end, we develop such a framework based on the two elementary concepts of dismemberment of treatments and a designed learning phase prior to dismemberment. These concepts can be incorporated into existing MAB algorithms to effectively yield new algorithms that better control the expectation and variance of regret. We demonstrate the utility of our framework by constructing new variants of the Thompson sampler that involve a small number of simple tuning parameters. As we illustrate in simulation and case studies, these new algorithms are implemented in a straightforward manner and achieve improved control of both regret criteria compared to the traditional Thompson sampler. Ultimately, our consideration of additional criteria besides expected regret illuminates novel insights into the multi-armed bandit problem.
Finally, we present visualization methods, and a corresponding R Shiny app for their practical execution, that can yield insights into the comparative performances of popular MAB algorithms. Our visualizations illuminate the frequentist dynamics of these algorithms in terms of how they perform the exploration-exploitation trade-off over their populations of realizations as well as the algorithms' relative regret behaviors. The constructions of our visualizations facilitate a straightforward understanding of complicated MAB algorithms, so that our visualizations and app can serve as unique and interesting pedagogical tools for students and instructors of experimental design.
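As a concrete, if simplified, illustration of the ideas of a designed learning phase and regret variance, the sketch below runs a Bernoulli Thompson sampler preceded by a uniform-exploration phase and reports both the mean and the standard deviation of cumulative regret across replications. It is not the thesis's algorithm, and the arm means, horizon, and phase length are invented:

```python
# Minimal sketch: Bernoulli Thompson sampling with a designed learning phase,
# tracking mean and standard deviation of cumulative (pseudo-)regret.
import numpy as np

def run_bandit(p, horizon=2000, learn_len=100, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    k = len(p)
    alpha, beta = np.ones(k), np.ones(k)                  # Beta(1, 1) priors on each arm
    regret = 0.0
    for t in range(horizon):
        if t < learn_len:
            arm = t % k                                    # designed learning phase: cycle the arms
        else:
            arm = int(np.argmax(rng.beta(alpha, beta)))    # Thompson draw
        reward = rng.random() < p[arm]
        alpha[arm] += reward
        beta[arm] += 1 - reward
        regret += max(p) - p[arm]                          # expected regret increment
    return regret

p_true = [0.45, 0.50, 0.55]
rng = np.random.default_rng(7)
regrets = np.array([run_bandit(p_true, rng=rng) for _ in range(200)])
print(f"mean regret {regrets.mean():.1f}, sd of regret {regrets.std(ddof=1):.1f}")
```

Varying learn_len in this toy setting shows how a longer designed phase shifts both the location and the spread of the regret distribution, which is the kind of trade-off the abstract discusses.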
|
477 |
Fisher's Randomization Test versus Neyman's Average Treatment Test
Georgii Hellberg, Kajsa-Lotta; Estmark, Andreas, January 2019 (has links)
The following essay describes and compares Fisher's randomization test and Neyman's average treatment test, with the intention of providing an easily understood blueprint for the practical execution of the tests and the conditions surrounding them. Focus is also directed towards the tests' different implications for statistical inference and how the design of a study, in relation to its assumptions, affects the external validity of the results. The essay is structured so that the tests are first presented and evaluated, then their respective advantages and limitations are weighed against each other before they are applied to a data set as a practical example. Lastly, the results obtained from the data set are compared in the Discussion section. The example used in this paper, which compares cigarette consumption after treating one group with nicotine patches and another with fake nicotine patches, shows a decrease in cigarette consumption for both tests. The tests differ, however, in that the result from the Neyman test can be made valid for the population of interest, whereas Fisher's test only identifies the effect within the sample and consequently cannot support conclusions about the population of heavy smokers. In short, the findings of this paper suggest that a combined use of the two tests would be the most appropriate way to test for a treatment effect: one could first use the Fisher test to check whether any effect at all exists in the experiment, and then use the Neyman test to complement these findings, for example by estimating an average treatment effect.
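The two procedures compared in the essay can be sketched side by side as follows; the data are simulated stand-ins rather than the essay's nicotine-patch data:

```python
# Illustrative sketch: a Fisher randomization test of the sharp null of no effect for
# any unit, and Neyman's difference-in-means estimate of the average treatment effect
# with a conservative variance and a normal-approximation test.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
n_t, n_c = 30, 30
treated = rng.normal(12.0, 4.0, n_t)    # e.g. cigarettes/day with nicotine patch (simulated)
control = rng.normal(15.0, 4.0, n_c)    # e.g. cigarettes/day with fake patch (simulated)

obs_diff = treated.mean() - control.mean()

# Fisher randomization test: permute treatment labels under the sharp null
pooled = np.concatenate([treated, control])
perm_diffs = np.empty(10_000)
for i in range(perm_diffs.size):
    perm = rng.permutation(pooled)
    perm_diffs[i] = perm[:n_t].mean() - perm[n_t:].mean()
p_fisher = np.mean(np.abs(perm_diffs) >= abs(obs_diff))

# Neyman: difference-in-means estimator with conservative variance estimate
var_hat = treated.var(ddof=1) / n_t + control.var(ddof=1) / n_c
z = obs_diff / np.sqrt(var_hat)
p_neyman = 2 * norm.sf(abs(z))

print(f"observed difference in means: {obs_diff:.2f}")
print(f"Fisher randomization p-value: {p_fisher:.3f}")
print(f"Neyman ATE z = {z:.2f}, p-value: {p_neyman:.3f}")
```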
|
478 |
Towards Machine Learning Inference in the Data Plane
Langlet, Jonatan, January 2019 (has links)
Recently, machine learning has been considered an important tool for various networking-related use cases such as intrusion detection, flow classification, etc. Traditionally, machine learning based classification algorithms run on dedicated machines that are outside of the fast path, e.g. on Deep Packet Inspection boxes, etc. This imposes additional latency in order to detect threats or classify the flows. With the recent advance of programmable data planes, implementing advanced functionality directly in the fast path is now a possibility. In this thesis, we propose to implement Artificial Neural Network inference together with flow metadata extraction directly in the data plane of P4 programmable switches, routers, or Network Interface Cards (NICs). We design a P4 pipeline and optimize the memory and computational operations for our data plane target, a programmable NIC with Micro-C external support. The results show that neural networks of a reasonable size (i.e. 3 hidden layers with 30 neurons each) can process flows totaling over a million packets per second, while the packet latency impact from extracting a total of 46 features is 1.85 μs.
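Data-plane targets like the programmable NIC mentioned above generally lack floating-point support, so inference has to be expressed in integer (fixed-point) arithmetic. The sketch below mirrors that idea in Python purely for readability; the weights, scale factor, and layer sizes are invented and do not reproduce the thesis's P4/Micro-C implementation:

```python
# Sketch of the fixed-point arithmetic a data-plane neural-network inference engine
# typically relies on: only integer adds, multiplies, and shifts.
import numpy as np

SCALE = 256                      # Q8 fixed point: real value = integer / 256

def quantize(w):
    """Map float weights to integers once, offline, before loading onto the device."""
    return np.round(w * SCALE).astype(np.int32)

def dense_relu_fixed(x_q, w_q, b_q):
    """One dense layer + ReLU using only integer operations."""
    acc = w_q @ x_q + b_q * SCALE          # Q8 * Q8 accumulates in Q16; bias promoted to Q16
    acc = np.maximum(acc, 0)               # ReLU
    return acc >> 8                        # rescale Q16 -> Q8 (divide by SCALE)

rng = np.random.default_rng(0)
features = rng.integers(0, 256, size=46)                     # 46 extracted flow features (Q8)
W1, b1 = quantize(rng.normal(0, 0.1, (30, 46))), quantize(rng.normal(0, 0.1, 30))
W2, b2 = quantize(rng.normal(0, 0.1, (2, 30))), quantize(rng.normal(0, 0.1, 2))

hidden = dense_relu_fixed(features, W1, b1)
logits = W2 @ hidden + b2 * SCALE
print("predicted class:", int(np.argmax(logits)))
```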
|
479 |
La compréhension d'histoires de littérature jeunesse chez l'enfant : quelle évolution en matière de production d'inférences émotionnelles et humoristiques entre 6 et 10 ans ? / Children's comprehension of stories: how children produce emotional and humorous inferences between 6 to 10 years?
Creissen, Sara, 24 September 2015 (links)
The aim of this thesis was twofold. First, we explored how children aged 6 to 10 years monitor and represent the emotional dimension of stories. Three types of emotional information were distinguished: emotional labels, emotional behavior, and emotions requiring an inference. The first two studies examined the representation of these types of emotional information, as usually encountered in natural stories, in both auditory and audiovisual contexts. The third study focused on children's ability to use emotional information to make predictive inferences (i.e., to anticipate what would happen next in a story). The main results indicated that young children (i.e., 6-year-olds) encountered more difficulties in making emotional and predictive inferences than older children, and that children represented emotional labels and emotional behavior more accurately than emotions requiring an inference. Finally, the ability to understand emotion proved generalizable across the different media.
The second purpose of this thesis was to study how children aged 6 to 10 years identify, appreciate, and understand humorous and nonhumorous passages in auditory and audiovisual natural stories. Two types of humorous information were considered: the protagonist's humorous behaviors (i.e., explicit situational humor) and implicit humorous information, such as wordplay, that required the ability to make an inference. The main results revealed that children identified nonhumorous passages more easily than humorous ones, and that they appreciated situational humor more than humor requiring an inference, all the more so the younger they were. Furthermore, the audiovisual situation favored the identification of humorous passages, whereas the auditory situation promoted the identification of nonhumorous passages. Finally, when interpreting humorous situations, young children more often used high-level explanations, whereas older children used high- and low-level explanations equally. These findings are discussed in relation to the existing literature.
|
480 |
Estimativa de componentes de (co)variância de características de crescimento na raça Brahman utilizando inferência bayesiana
Acevedo Jiménez, Efraín Enrique, January 2012 (links)
Advisor: Humberto Tonhati / Co-advisor: Rusbel Raúl Aspilcueta Borquis / Committee: João Ademir de Oliveira / Committee: Anibal Eugênio Vercesi Filho / Abstract: The objective of this work was to estimate genetic parameters for growth traits using a multiple-trait model. Records of 14,956 animals of the Brahman breed, participants in the Brahman breeding program developed by the Breeders and Researchers National Association (ANCP), were analyzed. Using Bayesian inference, estimates of variance components were obtained for the standardized weights at 60 (W60), 120 (W120), 210 (W210), 365 (W365), 450 (W450), and 550 (W550) days of age. The analyses were performed with the GIBBS2F90 software, assuming an animal model. For the standardized weights, the model included the fixed effects of contemporary group (herd - birth year - birth season - sex - management) and the age of the animal at weighing (linear and quadratic) as a covariate, together with the random direct additive genetic, maternal additive genetic, and residual effects. Direct heritability estimates were 0.31 (W60), 0.37 (W120), 0.34 (W210), 0.38 (W365), 0.37 (W450), and 0.45 (W550). Maternal heritability estimates were 0.18 (W60), 0.19 (W120), 0.22 (W210), 0.14 (W365), 0.11 (W450), and 0.08 (W550). Direct genetic correlations ranged from 0.79 (W60/W450) to 0.94 (W365/W450). Given the estimated parameters, the traits studied here show enough genetic variability for animal selection, and the genetic correlations indicate that simultaneous selection for all traits in this study could be efficient. / Master's
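As a small illustration of how the reported heritabilities follow from the estimated (co)variance components, the sketch below computes direct and maternal heritability summaries from simulated posterior draws. The draws are placeholders for a single trait, not output from the GIBBS2F90 analysis, and the variance partition assumes no direct-maternal covariance:

```python
# Sketch: deriving direct and maternal heritability from posterior draws of the
# variance components of an animal model with a maternal genetic effect.
import numpy as np

rng = np.random.default_rng(3)
n_draws = 5000

# Hypothetical posterior samples of the variance components for one weight trait
sigma2_a = rng.gamma(shape=50, scale=0.8, size=n_draws)   # direct additive genetic variance
sigma2_m = rng.gamma(shape=30, scale=0.4, size=n_draws)   # maternal additive genetic variance
sigma2_e = rng.gamma(shape=80, scale=0.9, size=n_draws)   # residual variance

sigma2_p = sigma2_a + sigma2_m + sigma2_e                 # phenotypic variance (no a-m covariance assumed)
h2_direct = sigma2_a / sigma2_p                           # direct heritability, draw by draw
h2_maternal = sigma2_m / sigma2_p                         # maternal heritability, draw by draw

for name, h2 in [("direct", h2_direct), ("maternal", h2_maternal)]:
    lo, hi = np.percentile(h2, [2.5, 97.5])
    print(f"{name} h2: posterior mean {h2.mean():.2f}, 95% credible interval [{lo:.2f}, {hi:.2f}]")
```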
|