221

Theoretical and empirical essays on microeconometrics

Possebom, Vitor Augusto 17 February 2016 (has links)
This Master's thesis consists of one theoretical article and one empirical article in the field of microeconometrics. The first chapter, "Synthetic Control Estimator: A Generalized Inference Procedure and Confidence Sets", contributes to the literature on inference techniques for the synthetic control method. This methodology was proposed to answer questions involving counterfactuals when only one treated unit and a few control units are observed. Although the method has been applied in many empirical works, the formal theory behind its inference procedure is still an open question. To fill this gap, we make explicit sufficient hypotheses that guarantee the adequacy of Fisher's exact hypothesis testing procedure for panel data, allowing us to test any sharp null hypothesis and, consequently, to propose a new way to estimate confidence sets for the synthetic control estimator by inverting a test statistic: the first confidence sets available when one has access only to finite-sample, aggregate-level data whose cross-sectional dimension may be larger than its time dimension. Moreover, we analyze the size and power of the proposed test in a Monte Carlo experiment and find that test statistics based on the synthetic control method outperform the test statistics commonly used in the evaluation literature. We also extend our framework to the cases in which more than one outcome of interest is observed (simultaneous hypothesis testing), more than one unit is treated (pooled intervention effect), or heteroskedasticity is present. The second chapter, "Free Economic Area of Manaus: An Impact Evaluation using the Synthetic Control Method", is an empirical article. We apply the synthetic control method to Brazilian city-level data over the 20th century to evaluate the economic impact of the Free Economic Area of Manaus (FEAM). We find that this enterprise zone had significant positive effects on real GDP per capita and on total services production per capita, but a significant negative effect on total agricultural production per capita. Our results suggest that this subsidy policy achieved its goal of promoting regional economic growth, even though it may have provoked misallocation of resources across economic sectors.
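For illustration, a minimal sketch of the placebo-style permutation logic behind such a Fisher exact test, assuming effect trajectories have already been estimated by re-running the synthetic control with each unit in turn treated as the intervention unit (the function names and the post/pre RMSPE ratio statistic are illustrative choices, not the author's exact code):

```python
import numpy as np

def rmspe_ratio(effect, pre_periods):
    """Post/pre-intervention RMSPE ratio: large values signal a real effect."""
    pre = np.sqrt(np.mean(effect[:pre_periods] ** 2))
    post = np.sqrt(np.mean(effect[pre_periods:] ** 2))
    return post / pre

def fisher_p_value(effects, treated_idx, pre_periods):
    """Exact permutation p-value under the sharp null of no effect.

    `effects` is a (units x periods) array of effect trajectories, each
    estimated by pretending the corresponding unit was the treated one.
    """
    stats = np.array([rmspe_ratio(e, pre_periods) for e in effects])
    # Under the sharp null, treatment assignment is exchangeable, so the
    # p-value is the rank share of the treated unit's statistic.
    return np.mean(stats >= stats[treated_idx])
```

Inverting this test over a grid of hypothesized intervention effects (subtracting each candidate effect from the treated unit's outcome before re-testing) yields confidence sets of the kind the chapter proposes.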
222

Statistical properties of barycenters in the Wasserstein space and fast algorithms for optimal transport of measures / Propriétés statistiques du barycentre dans l’espace de Wasserstein

Cazelles, Elsa 21 September 2018 (has links)
This thesis focuses on the analysis of data in the form of probability measures on R^d. The aim is to provide a better understanding of the usual statistical tools on this space endowed with the Wasserstein distance. A natural first-order statistical analysis consists of the study of the Fréchet mean (or barycenter). In particular, we focus on the case of discrete data (or observations) sampled from probability measures that are absolutely continuous (a.c.) with respect to the Lebesgue measure. We introduce an estimator of the barycenter of random measures, penalized by a convex function, which makes it possible to enforce its absolute continuity. Another estimator is regularized by adding an entropy term when computing the Wasserstein distance. We are particularly interested in controlling the variance of these estimators. Thanks to these results, the principle of Goldenshluger and Lepski allows us to obtain an automatic calibration of the regularization parameters. We then apply this work to the registration of multivariate densities, in particular for flow cytometry data. We also propose a goodness-of-fit test that compares two multivariate distributions efficiently in terms of computation time. Finally, we perform a second-order statistical analysis to extract the global geometric tendencies of a dataset, that is, its main modes of variation. For that purpose, we propose an algorithm for carrying out geodesic principal component analysis in the Wasserstein space.
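For illustration, a minimal numpy sketch of the entropy-regularized barycenter computation studied here, following the fixed-support iterative Bregman projection scheme for histograms on a common grid; the regularization strength `reg` is the kind of parameter the Goldenshluger and Lepski principle calibrates, and the grid, cost matrix, and iteration count are assumptions of the sketch:

```python
import numpy as np

def sinkhorn_barycenter(hists, cost, reg, weights=None, n_iter=200):
    """Entropy-regularized Wasserstein barycenter of histograms sharing a
    common support, via iterative Bregman projections (fixed support)."""
    K = np.exp(-cost / reg)                  # Gibbs kernel on the grid
    n, m = hists.shape                       # n histograms of length m
    w = np.full(n, 1.0 / n) if weights is None else np.asarray(weights)
    v = np.ones((n, m))
    b = np.full(m, 1.0 / m)
    for _ in range(n_iter):
        u = hists / (K @ v.T).T              # one Sinkhorn scaling per input
        Ktu = (K.T @ u.T).T
        # Geometric-mean projection: the barycenter in log-domain.
        b = np.exp((w[:, None] * np.log(Ktu)).sum(axis=0))
        v = b[None, :] / Ktu
    return b
```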
223

Die kerk en die sorggewers van VIGS-weeskinders / The church and the caregivers of AIDS orphans

Strydom, Marina 01 January 2002 (has links)
Text in Afrikaans / Because of the demanding nature of caring for AIDS orphans, the caregivers often find themselves in a position where they themselves need care and support. The question arose as to how these caregivers can be supported. It became clear that the church, out of its social responsibility, can offer care and support to the caregivers. Caregivers of one institution that took part in the research journey indeed did not receive enough care and support from the church, and this lack of support has a direct influence on how the caregivers cope with the demands of caregiving. Caregivers of the other two participating institutions receive enough support from church members, and this makes a great difference to how the stress of caregiving is experienced. In this study, a critical look is taken at the ways in which the church is involved, and can become further involved, with the caregivers of AIDS orphans. / Philosophy, Practical and Systematic Theology / M.Th. (Practical Theology)
224

MMD and Ward criterion in a RKHS : application to Kernel based hierarchical agglomerative clustering / Maximum Mean Discrepancy et critère de Ward dans un RKHS : application à la classification hiérarchique à noyau

Li, Na 01 December 2015 (has links)
Clustering, as a useful tool for unsupervised classification, is the task of grouping objects according to measured or perceived characteristics, and it has had great success in exploring the hidden structure of unlabeled data sets. Kernel-based clustering algorithms have gained particular prominence: they provide competitive performance compared with conventional methods, owing to their ability to transform nonlinear problems into linear ones in a higher-dimensional feature space. In this work, we propose a Kernel-based Hierarchical Agglomerative Clustering (KHAC) algorithm using Ward's criterion. Our method is motivated by a recently introduced criterion called the Maximum Mean Discrepancy (MMD). This criterion was first proposed to measure the difference between distributions and can easily be embedded in an RKHS. Close relationships are proved between MMD and Ward's criterion. In our KHAC method, the selection of the kernel parameter and the determination of the number of clusters are studied and provide satisfactory performance. Finally, an iterative KHAC algorithm is proposed that aims at determining the optimal kernel parameter, giving a meaningful number of clusters, and partitioning the data set automatically.
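For illustration, a minimal sketch of the (biased) squared-MMD estimate between two samples under a Gaussian kernel, the quantity the thesis connects to Ward's criterion (the kernel choice and bandwidth are illustrative assumptions):

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    """Gram matrix k(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq / (2 * sigma ** 2))

def mmd2(X, Y, sigma=1.0):
    """Biased estimate of the squared MMD between two samples in an RKHS."""
    return (gaussian_kernel(X, X, sigma).mean()
            + gaussian_kernel(Y, Y, sigma).mean()
            - 2 * gaussian_kernel(X, Y, sigma).mean())
```

The link to Ward's criterion comes from weighting this squared distance between mean embeddings by the usual Ward factor n_X n_Y / (n_X + n_Y); the agglomerative algorithm then merges, at each step, the pair of clusters with the smallest weighted distance.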
225

Inferência estatística para regressão múltipla h-splines / Statistical inference for h-splines multiple regression

Morellato, Saulo Almeida, 1983- 25 August 2018 (has links)
Advisor: Ronaldo Dias / Doctoral thesis (Tese de doutorado), Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica / In this work we discuss two inference problems related to multiple nonparametric regression: estimation in additive models using a nonparametric method, and hypothesis testing for the equality of curves, also within additive models. In the estimation step, we construct a generalization of the h-splines method, both in the sequential adaptive context proposed by Dias (1999) and in the Bayesian context proposed by Dias and Gamerman (2002). The h-splines methods provide an automatic choice of the number of basis functions used in estimating the model. Simulation studies show that the results obtained by the proposed estimation methods are superior to those achieved by the gamlss, mgcv and DPpackage packages in R. Two hypothesis tests are constructed to test H0 : f = f0: one whose decision rule is based on the integrated squared distance between two curves, for the adaptive sequential approach, and another based on the Bayesian evidence measure proposed by Pereira and Stern (1999). In the Bayesian hypothesis test, the behavior of the evidence measure is observed in several simulation scenarios and is consistent with a measure of evidence favorable to H0. In the test based on the distance between curves, the power of the test is estimated in various scenarios using simulations, and the results are satisfactory. Finally, the proposed estimation and testing procedures are applied to a dataset from the work of Tanaka and Nishii (2009) on deforestation in East Asia. The objective is to choose one among eight candidate models, and the tests agree in pointing to a pair of models as the most suitable. / Doctorate in Statistics
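To make the estimation step concrete, a loose sketch of the sequential adaptive idea behind h-splines, namely increasing the number of B-spline basis functions until successive fits barely change; the stopping rule, knot placement, and defaults here are illustrative, not the thesis's exact criterion:

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def h_splines_fit(x, y, gamma=1e-3, max_knots=30):
    """Sequentially add B-spline basis functions until successive fits
    barely change. `x` must be sorted in increasing order."""
    grid = np.linspace(x.min(), x.max(), 512)
    prev = None
    for k in range(1, max_knots):
        # k equally spaced interior knots, strictly inside the data range
        knots = np.linspace(x.min(), x.max(), k + 2)[1:-1]
        fit = LSQUnivariateSpline(x, y, knots, k=3)
        cur = fit(grid)
        if prev is not None and np.mean((cur - prev) ** 2) < gamma:
            return fit                       # fit has stabilized: stop
        prev = cur
    return fit
```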
226

O uso de ondaletas em modelos FANOVA / Wavelets in FANOVA models

Kist, Airton, 1971- 19 August 2018 (has links)
Advisor: Aluísio de Souza Pinheiro / Doctoral thesis (Tese de doutorado), Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica / The functional estimation problem has been studied in various ways in the literature. One very promising possibility is the use of orthonormal wavelet bases. This solution is appealing because of its frugality, asymptotic optimality, and computational speed. The main objective of this work is to extend the wavelet-based tests for fixed-effects FANOVA models with i.i.d. errors proposed in Abramovich et al. (2004) to fixed-effects FANOVA models with dependent errors. We propose an iterative Cochrane-Orcutt-type procedure to estimate the parameters and the function. The function is estimated nonparametrically, either by a wavelet estimator with term-by-term thresholding or by a linear wavelet kernel estimator. We show that, with i.i.d. errors, the individual convergence of the wavelet kernel estimator at dyadic points to a normally distributed random variable implies the joint convergence of this vector to a multivariate normal random variable. Furthermore, we show mean squared error convergence of the estimator at the dyadic points. Under a restriction, it is possible to show that this estimator converges at the dyadic points to a normally distributed variable even when the errors are correlated; the vector of individual limits again converges to a multivariate normal variable. / Doctorate in Statistics
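A minimal sketch of term-by-term wavelet thresholding for a noisy signal, the kind of nonparametric estimator the thesis builds on; the db4 wavelet, decomposition level, and universal threshold are illustrative choices:

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_threshold_estimate(y, wavelet="db4", level=4):
    """Denoise y by soft-thresholding its wavelet detail coefficients
    term by term, keeping the coarse approximation untouched."""
    coeffs = pywt.wavedec(y, wavelet, level=level)
    # Robust noise-scale estimate from the finest detail coefficients.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(len(y)))    # universal threshold
    denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                              for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(y)]
```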
227

Statistical detection for digital image forensics / Détection statistique pour la criminalistique des images numériques

Qiao, Tong 25 April 2016 (has links)
The remarkable evolution of information technology and digital imaging over the past decades has made digital images ubiquitous. The tampering of these images has become an unavoidable reality, especially in the field of cybercrime. The credibility and trustworthiness of digital images have been eroded, with important political, economic, and social consequences. The field of digital forensics was born to restore trust in digital images. Three important problems are addressed in this thesis: image origin identification, detection of hidden information in a digital image, and the detection of one example of image tampering: resampling. The goal is to develop a statistical decision approach, as reliable as possible, that guarantees a prescribed false alarm probability. To this end, the approach involves designing a statistical test within the framework of hypothesis testing theory, based on a parametric model that characterizes the physical and statistical properties of natural images. This model is developed by studying the image processing pipeline of a digital camera. As part of this work, the difficulty posed by the presence of unknown parameters is addressed using statistical estimation, making the application of the tests straightforward in practice. Numerical experiments on simulated and real images highlight the relevance of the proposed approach.
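To show what a prescribed false alarm probability means in practice, a minimal sketch in the simplest setting, a Gaussian mean shift with known variance; the model is an assumption of the sketch, chosen to show how the threshold is fixed exactly by alpha:

```python
import numpy as np
from scipy import stats

def lr_test_mean_shift(x, mu0, mu1, sigma, alpha=0.05):
    """Most-powerful test of H0: N(mu0, sigma^2) vs H1: N(mu1, sigma^2),
    mu1 > mu0, with threshold set for an exact false-alarm probability alpha."""
    x = np.asarray(x)
    n = x.size
    # The likelihood ratio is monotone in the sample mean, so the test
    # thresholds x-bar; under H0, x-bar ~ N(mu0, sigma^2 / n).
    tau = mu0 + sigma / np.sqrt(n) * stats.norm.ppf(1 - alpha)
    return x.mean() > tau
```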
228

Statistical transfer matrix-based damage localization and quantification for civil structures / Localisation et quantification statistiques d'endommagements à partir des matrices de transfert pour les structures de génie civil

Bhuyan, Md Delwar Hossain 23 November 2017 (has links)
Vibration-based damage localization has become an important issue in Structural Health Monitoring (SHM). In particular, the Stochastic Dynamic Damage Locating Vector (SDDLV) method is an output-only damage localization method based on both a Finite Element (FE) model of the structure and modal parameters estimated from output-only measurements in the reference and damaged states of the system, interrogating changes in the transfer matrix. First, the SDDLV method is extended with a joint statistical approach for multiple mode sets, overcoming the theoretical limitation on the number of modes in previous works. Another problem is that the performance of the method can change considerably depending on the Laplace variable at which the transfer function is evaluated; particular attention is given to this choice and how to optimize it. Second, the Influence Line Damage Location (ILDL) approach, which is complementary to SDDLV, is extended with a statistical framework. Third, a sensitivity approach for small damages is developed based on the transfer matrix difference, allowing damage localization through statistical tests in a Gaussian framework and, in a second step, quantification of the damage. Finally, the proposed methods are validated on numerical simulations, and their performance is tested extensively in numerous case studies on lab experiments.
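A minimal sketch of the algebraic core of the SDDLV idea: a load vector is drawn from the null space of the transposed transfer-matrix change between states, and FE elements carrying near-zero stress under that load are damage candidates. The `stress_from_load` callback standing in for the FE model is hypothetical:

```python
import numpy as np

def sddlv_load_vector(G_ref, G_dam):
    """Load vector from the null space of the transposed change in the
    transfer matrix between reference and damaged states."""
    delta = (G_dam - G_ref).T
    _, _, vh = np.linalg.svd(delta)
    # The last right singular vector pairs with the smallest singular
    # value, i.e. the direction closest to the null space of delta.
    return vh[-1].conj()

def damage_candidates(G_ref, G_dam, stress_from_load, tol=0.05):
    """Elements whose stress under the null-space load is near zero are
    candidate damage locations; `stress_from_load` wraps the FE model."""
    v = sddlv_load_vector(G_ref, G_dam)
    stress = np.abs(stress_from_load(v))     # one stress value per element
    return np.where(stress < tol * stress.max())[0]
```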
229

Model-Based Hypothesis Testing in Biomedicine : How Systems Biology Can Drive the Growth of Scientific Knowledge

Johansson, Rikard January 2017 (has links)
The utilization of mathematical tools within biology and medicine has traditionally been less widespread than in other hard sciences, such as physics and chemistry. However, an increased need for tools such as data processing, bioinformatics, statistics, and mathematical modeling has emerged due to advancements during the last decades. These advancements are partly due to the development of high-throughput experimental procedures and techniques, which produce ever increasing amounts of data. For all aspects of biology and medicine, these data reveal a high level of inter-connectivity between components, which operate on many levels of control, with multiple feedbacks both between and within each level. However, the availability of these large-scale data is not synonymous with a detailed mechanistic understanding of the underlying system. Rather, a mechanistic understanding is gained first when we construct a hypothesis and test its predictions experimentally. Identifying interesting predictions that are quantitative in nature generally requires mathematical modeling. This, in turn, requires that the studied system can be formulated as a mathematical model, such as a series of ordinary differential equations, in which different hypotheses can be expressed as precise mathematical expressions that influence the output of the model. Within specific sub-domains of biology, mathematical modeling has a long tradition, such as the modeling of electrophysiology by Hodgkin and Huxley in the 1950s. However, it is only in recent years, with the arrival of the field known as systems biology, that mathematical modeling has become more commonplace. The somewhat slow adoption of mathematical modeling in biology is partly due to historical differences in training and terminology, as well as a lack of awareness of showcases illustrating how modeling can make a difference, or even be required, for a correct analysis of the experimental data. In this work, I provide such showcases by demonstrating the universality and applicability of mathematical modeling and hypothesis testing in three disparate biological systems. In Paper II, we demonstrate how mathematical modeling is necessary for the correct interpretation and analysis of dominant negative inhibition data in insulin signaling in primary human adipocytes. In Paper III, we use modeling to determine transport rates across the nuclear membrane in yeast cells, and we show how this technique is superior to traditional curve-fitting methods. We also demonstrate the issue of population heterogeneity and the need to account for individual differences between cells and the population at large. In Paper IV, we use mathematical modeling to reject three hypotheses concerning the phenomenon of facilitation in pyramidal nerve cells in rats and mice. We also show how one surviving hypothesis can explain all data and adequately describe independent validation data. Finally, in Paper I, we develop a method for model selection and discrimination using parametric bootstrapping and the combination of several different empirical distributions of traditional statistical tests. We show how the empirical log-likelihood ratio test is the best combination of two tests and how this can be used not only for model selection but also for model discrimination. In conclusion, mathematical modeling is a valuable tool for analyzing data and testing biological hypotheses, regardless of the underlying biological system. Further development of modeling methods and applications is therefore important, since these will in all likelihood play a crucial role in all future aspects of biology and medicine, especially in dealing with the burden of the increasing amounts of data made available by new experimental techniques.
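To make the Paper I method concrete, a minimal sketch of parametric bootstrapping for model discrimination; the `fit`/`simulate` interface of the candidate models is a hypothetical API, not the paper's code:

```python
import numpy as np

def bootstrap_llr_test(data, null_model, alt_model, n_boot=1000, alpha=0.05):
    """Parametric bootstrap of the log-likelihood ratio. Each model must
    expose fit(data) -> loglik and simulate() -> synthetic data (assumed API)."""
    observed = alt_model.fit(data) - null_model.fit(data)
    boot = np.empty(n_boot)
    for b in range(n_boot):
        fake = null_model.simulate()          # data generated under the null
        boot[b] = alt_model.fit(fake) - null_model.fit(fake)
    # Reject the null model if the observed ratio is extreme under its
    # own empirical (bootstrap) distribution.
    p_value = np.mean(boot >= observed)
    return p_value, p_value < alpha
```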
230

Statistické testy pro VaR a CVaR / Statistical tests for VaR and CVaR

Mirtes, Lukáš January 2016 (has links)
The thesis presents test statistics for Value-at-Risk and Conditional Value-at-Risk. The reader is introduced to basic nonparametric estimators and their asymptotic distributions. Tests of the accuracy of Value-at-Risk are explained, and an asymptotic test for Conditional Value-at-Risk is derived. The thesis concludes by backtesting a Value-at-Risk model on real data and computing the statistical power and the probability of Type I error for the selected tests.
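As one concrete instance of such an accuracy test, a minimal sketch of the Kupiec proportion-of-failures backtest for Value-at-Risk, a standard test (not necessarily the thesis's exact choice); the continuity fix for degenerate exception counts is an assumption of the sketch:

```python
import numpy as np
from scipy import stats

def kupiec_pof_test(returns, var_forecasts, coverage=0.99):
    """Kupiec proportion-of-failures backtest: checks whether VaR exceptions
    occur at the nominal rate. Returns (LR statistic, p-value)."""
    # Exception: the realized loss exceeds the (positive) VaR forecast.
    exceptions = np.asarray(returns) < -np.asarray(var_forecasts)
    n, x = exceptions.size, float(exceptions.sum())
    p = 1.0 - coverage                        # nominal exception probability
    if x in (0.0, n):                         # degenerate counts: shrink slightly
        x = min(max(x, 0.5), n - 0.5)
    phat = x / n
    # Log-likelihood ratio of nominal vs observed exception frequency;
    # asymptotically chi-squared with one degree of freedom.
    lr = -2 * (x * np.log(p / phat) + (n - x) * np.log((1 - p) / (1 - phat)))
    return lr, stats.chi2.sf(lr, df=1)
```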
