81

Efficacité, généricité et praticabilité de l'attaque par information mutuelle utilisant la méthode d'estimation de densité par noyau / Efficiency, genericity and practicability of kernel-based mutual information analysis

Carbone, Mathieu 16 March 2015 (has links)
Nowadays, Side-Channel Analysis (SCA) comprises easy-to-implement yet powerful attacks against cryptographic implementations, posing a serious threat to the security of cryptosystems. Indeed, the execution of a cryptographic algorithm unavoidably leaks information about the internally manipulated data of the cryptosystem through side channels (time, temperature, power consumption, electromagnetic emanations, etc.), some of which are sensitive (i.e. depend on the secret key). One of the most important SCA steps for an adversary is to quantify the dependency between the measured side-channel leakage and an assumed leakage model using a statistical tool, also called a distinguisher, in order to obtain an estimate of the secret key. In the SCA literature, a plethora of distinguishers has been proposed.
This thesis focuses on Mutual Information (MI) based attacks, the so-called Mutual Information Analysis (MIA), and proposes to fill the gap on its major practical issue: estimating the MI index, which itself requires estimating the underlying distributions. Investigations are conducted using a popular nonparametric technique for estimating the underlying density with minimal assumptions: Kernel Density Estimation (KDE). First, a bandwidth selection scheme based on an adaptivity criterion, specific to SCA, is proposed. An in-depth analysis is then conducted to provide a guideline for making MIA efficient and generic with respect to this tuning hyperparameter, and to establish which attack context (connected to the statistical moment of the leakage) is most favorable to MIA. Next, we address another issue of kernel-based MIA, its computational burden, through a so-called Dual-Tree algorithm allowing fast evaluations of pair-wise kernel functions. We also show experimentally that MIA run in the frequency domain is effective and fast when combined with an accurate frequency leakage model. Additionally, we suggest an extension of an existing method for detecting leakage embedded in higher-order statistical moments.
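As a rough illustration of the kernel-based distinguisher described above, the following Python sketch estimates the mutual information between a continuous leakage sample and a discrete leakage model using Gaussian KDE and a resubstitution entropy estimate. The function names, the Hamming-weight toy data and the default bandwidth are assumptions made for illustration; the thesis's own bandwidth-selection criterion and attack flow are not reproduced here.

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_entropy(samples, bandwidth=None):
    """Resubstitution entropy estimate H(X) ~ -mean(log f_hat(x_i)) using a Gaussian KDE."""
    kde = gaussian_kde(samples, bw_method=bandwidth)
    return -np.mean(np.log(kde(samples)))

def mia_distinguisher(traces, model_values, bandwidth=None):
    """Kernel-based MI between a continuous leakage sample and a discrete leakage model:
    I(T; M) = H(T) - sum_m p(m) H(T | M = m)."""
    h_t = kde_entropy(traces, bandwidth)
    h_t_given_m = 0.0
    for m in np.unique(model_values):
        subset = traces[model_values == m]
        if len(subset) > 1:
            h_t_given_m += (len(subset) / len(traces)) * kde_entropy(subset, bandwidth)
    return h_t - h_t_given_m

# Toy usage (hypothetical data): a real attack would loop over key guesses and retain
# the guess whose predicted Hamming weights maximize the MI score.
rng = np.random.default_rng(0)
hw = rng.integers(0, 9, size=2000)            # predicted Hamming weights for one key guess
traces = hw + rng.normal(0, 2.0, size=2000)   # noisy leakage correlated with the model
print(mia_distinguisher(traces, hw))
```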
83

Comparison of plot survey and distance sampling as pellet group counts for deer in Sweden

Eckervall, Anneli January 2008 (has links)
Wildlife management deals with problems concerning sustainable harvest, conservation of threatened species and adjustment of wildlife populations to levels acceptable to, for instance, forestry, agriculture, traffic and conservation interests. Detailed knowledge of the population is then required, so it is important to develop reliable and cost-efficient survey methods. The purpose of this study was to test the distance sampling method, in which objects are recorded while walking along a line, as a way of counting deer pellet groups, and to compare the results with ordinary plot surveys. The inventory speed for distance sampling increases with increasing density of droppings per km², whereas the amount of droppings seems to have little or no effect on the inventory speed of the plot survey method. The plot survey method could therefore be a better alternative than the distance sampling method when dropping densities are high, and vice versa. When the two methods' estimates of animal density were compared with information provided orally by game managers, based on other surveys, aerial observations and estimations in the different areas, both methods gave densities that were too low for red deer in Valinge. This indicates that supplementary feeding seems to affect the red deer density results of both methods. A worked sketch of how each method turns field counts into a density estimate is given below.
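For a concrete sense of how the two survey methods produce density estimates, here is a hedged Python sketch: the plot survey divides the total pellet-group count by the surveyed area, while the line-transect (distance sampling) estimate shown assumes a half-normal detection function, a common textbook choice and not necessarily the model used in this study. All numbers and function names are illustrative.

```python
import numpy as np

def plot_survey_density(counts, plot_area_m2):
    """Pellet-group density from fixed plots: total count / total surveyed area."""
    return np.sum(counts) / (len(counts) * plot_area_m2)

def distance_sampling_density(perp_distances_m, transect_length_m):
    """Line-transect estimate with a half-normal detection function g(x) = exp(-x^2 / (2 sigma^2)):
    sigma^2 estimated by maximum likelihood from perpendicular distances,
    effective strip half-width mu = sigma * sqrt(pi/2), and density D = n / (2 * L * mu)."""
    x = np.asarray(perp_distances_m, dtype=float)
    sigma = np.sqrt(np.mean(x ** 2))
    mu = sigma * np.sqrt(np.pi / 2.0)   # effective strip half-width
    return len(x) / (2.0 * transect_length_m * mu)

# Hypothetical numbers, densities returned per square metre of ground
plots = np.array([0, 2, 1, 0, 3, 1, 0, 0, 2, 1])   # pellet groups in ten 100 m^2 plots
print(plot_survey_density(plots, plot_area_m2=100.0))
distances = np.abs(np.random.default_rng(1).normal(0, 1.5, size=40))  # metres from the line
print(distance_sampling_density(distances, transect_length_m=2000.0))
```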
85

STATISTICS IN THE BILLERA-HOLMES-VOGTMANN TREESPACE

Weyenberg, Grady S. 01 January 2015 (has links)
This dissertation is an effort to adapt two classical non-parametric statistical techniques, kernel density estimation (KDE) and principal components analysis (PCA), to the Billera-Holmes-Vogtmann (BHV) metric space for phylogenetic trees. This adaptation gives a more general framework for developing and testing various hypotheses about apparent differences or similarities between sets of phylogenetic trees than currently exists. For example, while the majority of gene histories found in a clade of organisms are expected to be generated by a common evolutionary process, numerous other coexisting processes (e.g. horizontal gene transfers, gene duplication and subsequent neofunctionalization) will cause some genes to exhibit a history quite distinct from the histories of the majority of genes. Such “outlying” gene trees are considered to be biologically interesting, and identifying these genes has become an important problem in phylogenetics. The R software package kdetrees, developed in Chapter 2, contains an implementation of the kernel density estimation method. The primary theoretical difficulty involved in this adaptation concerns the normalization of the kernel functions in the BHV metric space. This problem is addressed in Chapter 3. In both chapters, the software package is applied to both simulated and empirical datasets to demonstrate the properties of the method. A few first theoretical steps in the adaptation of principal components analysis to the BHV space are presented in Chapter 4. It becomes necessary to generalize the notion of a set of perpendicular vectors in Euclidean space to the BHV metric space, but there is some ambiguity about how best to proceed. We show that convex hulls are one reasonable approach to the problem. The Nye PCA algorithm provides a method of projecting onto arbitrary convex hulls in BHV space, providing the core of a modified PCA-type method.
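The kdetrees package itself is written in R and its exact interface is not shown here; the hedged Python sketch below only illustrates the general idea of flagging outlying gene trees by an (unnormalized) kernel score computed from pairwise BHV geodesic distances. The bandwidth rule, function names and random distance matrix are assumptions for illustration.

```python
import numpy as np

def kde_outlier_scores(dist_matrix, bandwidth=None):
    """Score each tree by an unnormalized kernel density value computed from its
    BHV geodesic distances to all other trees; low scores flag candidate outliers."""
    d = np.asarray(dist_matrix, dtype=float)
    if bandwidth is None:
        # crude default: median pairwise distance (an assumption, not the package's rule)
        bandwidth = np.median(d[np.triu_indices_from(d, k=1)])
    k = np.exp(-0.5 * (d / bandwidth) ** 2)   # Gaussian-type kernel applied to distances
    np.fill_diagonal(k, 0.0)                  # leave-one-out: exclude each tree's self-term
    return k.sum(axis=1)

# Usage: given a precomputed matrix of pairwise BHV distances between gene trees,
# trees whose score falls far below the rest are reported as outliers.
rng = np.random.default_rng(2)
dists = np.abs(rng.normal(1.0, 0.2, size=(30, 30)))
dists = (dists + dists.T) / 2.0
np.fill_diagonal(dists, 0.0)
scores = kde_outlier_scores(dists)
print(np.argsort(scores)[:3])  # indices of the three lowest-scoring (most atypical) trees
```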
86

An Analysis Tool for Flight Dynamics Monte Carlo Simulations

Restrepo, Carolina 1982- 16 December 2013 (has links)
Spacecraft design is inherently difficult due to the nonlinearity of the systems involved as well as the expense of testing hardware in a realistic environment. The number and cost of flight tests can be reduced by performing extensive simulation and analysis work to understand vehicle operating limits and identify circumstances that lead to mission failure. A Monte Carlo simulation approach that varies a wide range of physical parameters is typically used to generate thousands of test cases. Currently, the data analysis process for a fully integrated spacecraft is mostly performed manually on a case-by-case basis, often requiring several analysts to write additional scripts in order to sort through the large data sets. There is no single method that can be used to identify these complex variable interactions in a reliable and timely manner as well as be applied to a wide range of flight dynamics problems. This dissertation investigates the feasibility of a unified, general approach to the process of analyzing flight dynamics Monte Carlo data. The main contribution of this work is the development of a systematic approach to finding and ranking the most influential variables and combinations of variables for a given system failure. Specifically, a practical and interactive analysis tool that uses tractable pattern recognition methods to automate the analysis process has been developed. The analysis tool has two main parts: the analysis of individual influential variables and the analysis of influential combinations of variables. This dissertation describes in detail the two main algorithms used: kernel density estimation and nearest neighbors. Both are non-parametric density estimation methods that are used to analyze hundreds of variables and combinations thereof to provide an analyst with insightful information about the potential cause for a specific system failure. Examples of dynamical systems analysis tasks using the tool are provided.
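One simple way to automate the "individual influential variable" step described above is to compare, for each dispersed input, the kernel density estimates of its values in failed versus successful runs and rank inputs by how much the two densities differ. The Python sketch below uses the integrated absolute difference as the separation measure; this measure, the function names and the toy data are assumptions, not necessarily the metric implemented in the tool.

```python
import numpy as np
from scipy.stats import gaussian_kde

def influence_score(values, failed_mask, grid_size=200):
    """Rank a dispersed input by how differently it is distributed in failed vs. passed
    runs, using the integrated absolute difference between two kernel density estimates."""
    fail, ok = values[failed_mask], values[~failed_mask]
    grid = np.linspace(values.min(), values.max(), grid_size)
    diff = np.abs(gaussian_kde(fail)(grid) - gaussian_kde(ok)(grid))
    return np.trapz(diff, grid)

# Usage on a table of Monte Carlo inputs (rows = runs, columns = dispersed variables):
rng = np.random.default_rng(3)
inputs = rng.normal(size=(5000, 4))
failed = inputs[:, 2] > 1.5                       # toy failure driven by variable 2
scores = [influence_score(inputs[:, j], failed) for j in range(inputs.shape[1])]
print(np.argsort(scores)[::-1])                   # variables ranked most-to-least influential
```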
87

Resampling Evaluation of Signal Detection and Classification : With Special Reference to Breast Cancer, Computer-Aided Detection and the Free-Response Approach

Bornefalk Hermansson, Anna January 2007 (has links)
The first part of this thesis is concerned with trend modelling of breast cancer mortality rates. By using an age-period-cohort model, the relative contributions of period and cohort effects are evaluated once the unquestionable existence of the age effect is controlled for. The result of such a modelling gives indications in the search for explanatory factors. While this type of modelling is usually performed with 5-year period intervals, the use of 1-year period data, as in Paper I, may be more appropriate. The main theme of the thesis is the evaluation of the ability to detect signals in x-ray images of breasts. Early detection is the most important tool to achieve a reduction in breast cancer mortality rates, and computer-aided detection systems can be an aid for the radiologist in the diagnosing process. The evaluation of computer-aided detection systems includes the estimation of distributions. One way of obtaining estimates of distributions when no assumptions are at hand is kernel density estimation, or the adaptive version thereof that smoothes to a greater extent in the tails of the distribution, thereby reducing spurious effects caused by outliers. The technique is described in the context of econometrics in Paper II and then applied together with the bootstrap in the breast cancer research area in Papers III-V. Here, estimates of the sampling distributions of different parameters are used in a new model for free-response receiver operating characteristic (FROC) curve analysis. Compared to earlier work in the field, this model benefits from the advantage of not assuming independence of detections in the images, and in particular, from the incorporation of the sampling distribution of the system's operating point. Confidence intervals obtained from the proposed model with different approaches with respect to the estimation of the distributions and the confidence interval extraction methods are compared in terms of coverage and length of the intervals by simulations of lifelike data.
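The adaptive kernel estimator mentioned above can be illustrated with an Abramson-style scheme: a fixed-bandwidth pilot fit followed by sample-point bandwidths that grow where the pilot density is small, so the tails are smoothed more and spurious bumps from outliers are damped. The Python sketch below is a generic version of that idea under stated assumptions (Gaussian kernel, pilot bandwidth taken from scipy's rule of thumb), not the specific estimator used in the thesis.

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

def adaptive_kde(sample, eval_points, alpha=0.5):
    """Adaptive (Abramson-style) kernel density estimate: fixed-bandwidth pilot fit,
    then local bandwidths inflated where the pilot density is low (i.e. in the tails)."""
    x = np.asarray(sample, dtype=float)
    pilot = gaussian_kde(x)
    f_pilot = pilot(x)
    g = np.exp(np.mean(np.log(f_pilot)))          # geometric mean of pilot density values
    h = pilot.factor * x.std(ddof=1)              # global bandwidth from the pilot fit
    h_i = h * (f_pilot / g) ** (-alpha)           # local bandwidths, larger in the tails
    t = np.asarray(eval_points, dtype=float)[:, None]
    return np.mean(norm.pdf(t, loc=x[None, :], scale=h_i[None, :]), axis=1)

rng = np.random.default_rng(6)
data = np.concatenate([rng.normal(0, 1, 500), rng.normal(8, 1, 5)])  # bulk plus a few outliers
grid = np.linspace(-4, 12, 9)
print(adaptive_kde(data, grid))
```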
88

On probability distributions of diffusions and financial models with non-globally smooth coefficients

De Marco, Stefano 23 November 2010 (has links) (PDF)
Some recent works in the field of mathematical finance have brought new light on the importance of studying the regularity and the tail asymptotics of distributions for certain classes of diffusions with non-globally smooth coefficients. In this Ph.D. dissertation we deal with some issues in this framework. In a first part, we study the existence, smoothness and space asymptotics of densities for the solutions of stochastic differential equations, assuming only local conditions on the coefficients of the equation. Our analysis is based on Malliavin calculus tools and on "tube estimates" for Ito processes, namely estimates for the probability that the trajectory of an Ito process remains close to a deterministic curve. We obtain significant estimates of densities and distribution functions in general classes of option pricing models, including generalisations of CIR and CEV processes and Local-Stochastic Volatility models. In the latter case, the estimates we derive have an impact on the moment explosion of the underlying price and, consequently, on the large-strike behaviour of the implied volatility. Parametric implied volatility modeling, in turn, is the object of the second part. In particular, we focus on J. Gatheral's SVI model, first proposing an effective quasi-explicit calibration procedure and demonstrating its performance on market data. We then analyse the capability of SVI to generate efficient approximations of symmetric smiles, building an explicit time-dependent parameterization. We provide and test the numerical application to the Heston model (without and with displacement), for which we generate semi-closed expressions of the smile.
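For readers unfamiliar with the SVI model mentioned above, the sketch below evaluates Gatheral's raw SVI parameterization of the total implied variance; the parameter values are made up for illustration, and the quasi-explicit calibration procedure proposed in the thesis is not reproduced.

```python
import numpy as np

def svi_total_variance(k, a, b, rho, m, sigma):
    """Gatheral's (raw) SVI parameterization of the total implied variance
    w(k) = a + b * (rho * (k - m) + sqrt((k - m)^2 + sigma^2)),
    with k the log-forward-moneyness; implied volatility is sqrt(w(k) / T)."""
    k = np.asarray(k, dtype=float)
    return a + b * (rho * (k - m) + np.sqrt((k - m) ** 2 + sigma ** 2))

# Illustrative smile (parameter values are invented, not calibrated to any market):
k = np.linspace(-1.0, 1.0, 9)
w = svi_total_variance(k, a=0.04, b=0.4, rho=-0.4, m=0.0, sigma=0.1)
print(np.sqrt(w / 1.0))  # Black-Scholes implied vols for a 1-year maturity
```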
89

Comparing two populations using Bayesian Fourier series density estimation / Comparação de duas populações utilizando estimação bayesiana de densidades por séries de Fourier

Marco Henrique de Almeida Inácio 12 April 2017 (has links)
Given two samples from two populations, one could ask how similar the populations are, that is, how close their probability distributions are. For absolutely continuous distributions, one way to measure the proximity of such populations is to use a measure of distance (metric) between their probability density functions (which are unknown, given that only samples are observed). In this work, we use the integrated squared distance as the metric. To measure the uncertainty of the integrated squared distance, we first model the uncertainty of each probability density function using a nonparametric Bayesian method. The method consists of estimating the probability density function f (or its logarithm) using Fourier series $\{f_0, f_1, \ldots, f_I\}$. Assigning a prior distribution to f is then equivalent to assigning a prior distribution to the coefficients of this series. We use the prior suggested by Scricciolo (2006) (a sieve prior), which places a prior not only on these coefficients but also on I itself, so that in reality we work with a Bayesian mixture of finite-dimensional models. To obtain posterior samples from such a mixture, we marginalize out the discrete model-index parameter I and use the statistical software Stan. We conclude that the Bayesian Fourier series method performs well when compared to kernel density estimation, although both methods often have problems estimating the probability density function near the boundaries. Lastly, we show how the Fourier series methodology can be used to assess the uncertainty regarding the similarity of two samples. In particular, we apply this method to a dataset of patients with Alzheimer's disease.
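To make the integrated squared distance concrete, the following Python sketch computes a simple plug-in (non-Bayesian) estimate of the integral of (f - g)^2 from two samples on [0, 1], using empirical coefficients in an orthonormal cosine basis and Parseval's identity. The basis choice, truncation level and function names are assumptions for illustration and differ from the thesis's Bayesian sieve-prior procedure.

```python
import numpy as np

def cosine_coeffs(sample, n_coeffs):
    """Empirical coefficients of a density on [0, 1] in the orthonormal cosine basis
    phi_0(x) = 1, phi_j(x) = sqrt(2) * cos(j * pi * x): a_j is estimated by mean(phi_j(X))."""
    x = np.asarray(sample, dtype=float)
    j = np.arange(n_coeffs)[:, None]
    phi = np.where(j == 0, 1.0, np.sqrt(2.0) * np.cos(j * np.pi * x[None, :]))
    return phi.mean(axis=1)

def integrated_squared_distance(sample_a, sample_b, n_coeffs=15):
    """Plug-in estimate of int (f - g)^2 via Parseval: sum of squared coefficient differences
    over the truncated basis."""
    return np.sum((cosine_coeffs(sample_a, n_coeffs) - cosine_coeffs(sample_b, n_coeffs)) ** 2)

rng = np.random.default_rng(4)
a = rng.beta(2, 5, size=1000)   # two samples supported on [0, 1]
b = rng.beta(2, 2, size=1000)
print(integrated_squared_distance(a, b))
```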
90

Sur l'estimation adaptative d'une densité multivariée sous l'hypothèse de la structure d'indépendance / On adaptive estimation of a multivariate density under independence hypothesis.

Rebelles, Gilles 10 December 2015 (has links)
The results obtained in this thesis concern the nonparametric estimation of probability densities. Primarily, we are interested in estimating a multivariate probability density which is anisotropic and inhomogeneous. We propose estimation procedures that take into account both the regularity properties of the underlying probability density and its independence structure. This allows us to reduce the influence of the dimension of the observation space on the accuracy of estimation and thus to improve it. To analyze the performance of our methods we adopt the minimax point of view and generalize a criterion of optimality for adaptive estimation. The use of the criterion we propose is necessary for estimation at a fixed point: in this setting, there is a "penalty" for adaptation with respect to the regularity and to the independence structure. This is no longer true for global estimation. In the density model (with direct observations) we consider both the problem of pointwise estimation and the problem of estimation under $\mathbb{L}_p$-loss ($p\in[1,\infty)$). In the deconvolution model (with noisy observations) we study the problem of estimation under $\mathbb{L}_p$-risk ($p\in[1,\infty]$) when the characteristic function of the noise decreases polynomially at infinity. Each estimator we propose is obtained by a random selection procedure within a family of kernel estimators.
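As a minimal illustration of why exploiting an independence structure reduces the influence of dimension, the hedged Python sketch below estimates a fully factorized density as a product of univariate kernel estimates, so each factor is estimated at a one-dimensional rate. The adaptive random selection procedure developed in the thesis is not reproduced; all names and data are illustrative.

```python
import numpy as np
from scipy.stats import gaussian_kde

def product_kde(data):
    """Fit one univariate KDE per coordinate; under full independence the joint density
    is estimated as the product of the marginal estimates."""
    return [gaussian_kde(col) for col in np.asarray(data, dtype=float).T]

def product_kde_eval(marginal_kdes, points):
    """Evaluate the product-form density estimate at the given points (one row per point)."""
    pts = np.atleast_2d(points)
    vals = np.ones(pts.shape[0])
    for j, kde in enumerate(marginal_kdes):
        vals *= kde(pts[:, j])
    return vals

rng = np.random.default_rng(5)
sample = rng.normal(size=(2000, 4))                 # 4 independent standard normal coordinates
f_hat = product_kde(sample)
print(product_kde_eval(f_hat, np.zeros((1, 4))))    # estimate of f(0,0,0,0); true value ~ 0.0253
```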
