21 |
Statistical Incipient Fault Detection and Diagnosis with Kullback-Leibler Divergence: from Theory to Applications. Harmouche, Jinane, 20 November 2014
This PhD dissertation deals with the detection and diagnosis of incipient faults in engineering and industrial systems by non-parametric statistical approaches. Like any fault, an incipient fault is supposed to provoke an abnormal change in the measurements of the system variables; however, this change is imperceptible and also unpredictable, owing to the large signal-to-fault ratio and the low fault-to-noise ratio that characterize incipient faults. Detecting and identifying such a change requires a ’global’ approach that takes the total fault signature into account. In this context, the Kullback-Leibler divergence is proposed as a ’global’ fault indicator that is sensitive to small abnormal variations hidden in noise. A ’global’ spectral analysis approach is also proposed for the diagnosis of faults with a frequency signature. The ’global’ statistical approach is demonstrated on two application studies: the first concerns the eddy-current detection and characterization of minor cracks in conductive structures; the second concerns the diagnosis of bearing faults in rotating electrical machines. In addition, this work addresses the problem of estimating the amplitude of incipient faults. A theoretical analysis, carried out within a principal component analysis framework, leads to an analytical model of the divergence that depends only on the fault parameters, from which an estimate of the fault amplitude is derived.
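The thesis derives its own analytical divergence model within a PCA framework; as a generic illustration only, here is a minimal sketch of KL-based detection, assuming Gaussian windows and using the closed-form divergence between two univariate normals (all function and variable names are ours, not the author's):

```python
import numpy as np

def kl_gaussian(mu1, s1, mu2, s2):
    """Closed-form KL(N(mu1, s1^2) || N(mu2, s2^2))."""
    return np.log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2 * s2**2) - 0.5

def fault_indicator(reference, window):
    """Symmetrised KL divergence between a healthy reference and a test window."""
    m1, sd1 = reference.mean(), reference.std(ddof=1)
    m2, sd2 = window.mean(), window.std(ddof=1)
    return kl_gaussian(m1, sd1, m2, sd2) + kl_gaussian(m2, sd2, m1, sd1)

rng = np.random.default_rng(0)
healthy = rng.normal(0.0, 1.0, 10_000)
# incipient fault: a tiny mean drift buried in the noise
faulty = rng.normal(0.0, 1.0, 10_000) + np.linspace(0.0, 0.1, 10_000)
print(fault_indicator(healthy, faulty[-2_000:]))
```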
|
22 |
Maximum Likelihood Theory for Retention of Effect Non-Inferiority Trials. Mielke, Matthias, 15 March 2010
No description available.
|
23 |
Objective Bayesian analysis for the generalized normal and generalized lognormal distributions. Jesus, Sandra Rêgo de, 21 November 2014
The generalized normal (GN) and generalized lognormal (logGN) distributions are flexible enough to accommodate features of the data that are not captured by traditional distributions such as the normal and the lognormal, respectively. These distributions are regarded as tools for reducing the impact of outliers and obtaining robust estimates. However, computational problems have always been the major obstacle to their effective use. This work proposes an objective Bayesian reference analysis methodology for estimating the parameters of the GN and logGN models. The reference prior for a given ordering of the model parameters is obtained, and it is shown to lead to a proper posterior distribution for all the proposed models. Markov chain Monte Carlo (MCMC) methods are developed for inference. To detect possibly influential observations in the considered models, the case-deletion Bayesian influence analysis based on the Kullback-Leibler divergence is used. In addition, a scale mixture of uniforms representation of the GN and logGN distributions is exploited as an alternative method that allows the development of efficient Gibbs sampling algorithms. Simulation studies were performed to analyze the frequentist properties of the estimation procedures, and applications to real data demonstrate the use of the proposed models.
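The case-deletion KL divergence has a standard MCMC estimate via the conditional predictive ordinate (CPO): K(pi, pi_(-i)) = -log CPO_i + E_post[log f(y_i | theta)]. Below is a minimal sketch under our own assumptions, with scipy's gennorm standing in for the thesis's GN likelihood and synthetic draws standing in for a real MCMC run:

```python
import numpy as np
from scipy.stats import gennorm
from scipy.special import logsumexp

def kl_case_deletion(y, beta_draws, mu_draws, sigma_draws):
    """K(full posterior, case-deleted posterior) per observation, estimated as
    -log(CPO_i) + E_post[log f(y_i | theta)] from posterior draws."""
    # log f(y_i | theta^(t)): T draws x n observations, by broadcasting
    loglik = gennorm.logpdf(y[None, :], beta_draws[:, None],
                            loc=mu_draws[:, None], scale=sigma_draws[:, None])
    # CPO_i is the harmonic mean of the likelihood over the draws
    log_cpo = np.log(loglik.shape[0]) - logsumexp(-loglik, axis=0)
    return -log_cpo + loglik.mean(axis=0)

rng = np.random.default_rng(0)
y = rng.standard_normal(50)
T = 2000
beta_d = rng.uniform(1.5, 2.5, T)   # stand-in posterior draws; in practice
mu_d = rng.normal(0.0, 0.1, T)      # these come from the reference-prior MCMC
sig_d = rng.uniform(0.8, 1.2, T)
print(kl_case_deletion(y, beta_d, mu_d, sig_d).argmax())  # most influential case
```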
|
24 |
Neuronal Dissimilarity Indices that Predict Oddball Detection in Behaviour. Vaidhiyan, Nidhin Koshy, January 2016
Our vision is as yet unsurpassed by machines because of the sophisticated representations of objects in our brains. This representation is vastly different from the pixel-based representation used in machine storage. It is this sophisticated representation that enables us to perceive two faces as very different, i.e., as far apart in the “perceptual space”, even though they are close to each other in their pixel-based representations. Neuroscientists have proposed distances between the responses of neurons to the images (as measured in macaque monkeys) as a quantification of the “perceptual distance” between the images; let us call these neuronal dissimilarity indices of perceptual distance. They have also proposed behavioural experiments to quantify these perceptual distances: human subjects are asked to identify, as quickly as possible, an oddball image embedded among multiple distractor images, and the reciprocal of the search time for identifying the oddball is taken as a measure of the perceptual distance between the oddball and the distractor. Let us call such estimates behavioural dissimilarity indices. In this thesis, we describe a decision-theoretic model for visual search that suggests a connection between these two notions of perceptual distance.
In the first part of the thesis, we model visual search as an active sequential hypothesis testing problem. Our analysis suggests an appropriate neuronal dissimilarity index which correlates strongly with the reciprocal of search times. We also consider a number of alternative possibilities such as relative entropy (Kullback-Leibler divergence), the Chernoff entropy and the L1-distance associated with the neuronal firing rate profiles. We then come up with a means to rank the various neuronal dissimilarity indices based on how well they explain the behavioural observations. Our proposed dissimilarity index does better than the other three, followed by relative entropy, then Chernoff entropy and then L1 distance.
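For homogeneous Poisson firing-rate profiles, the three alternative indices above have closed forms (up to the observation-time scaling) and can be computed directly. The following is a sketch under those assumptions; the thesis's own proposed index is not reproduced here:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def kl_poisson(l0, l1):
    """Relative entropy rate between Poisson processes with rate vectors l0, l1."""
    return np.sum(l0 * np.log(l0 / l1) - l0 + l1)

def chernoff_poisson(l0, l1):
    """Chernoff information: maximum over alpha of the Chernoff exponent."""
    def neg_exponent(a):
        return -np.sum(a * l0 + (1 - a) * l1 - l0**a * l1**(1 - a))
    res = minimize_scalar(neg_exponent, bounds=(1e-6, 1 - 1e-6), method='bounded')
    return -res.fun

def l1_poisson(l0, l1):
    return np.sum(np.abs(l0 - l1))

# firing-rate profiles (spikes/s) of a neuron population for two images
rates_a = np.array([12.0, 30.0, 8.0, 22.0])
rates_b = np.array([10.0, 35.0, 9.0, 18.0])
print(kl_poisson(rates_a, rates_b),
      chernoff_poisson(rates_a, rates_b),
      l1_poisson(rates_a, rates_b))
```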
In the second part of the thesis, we consider a scenario where the subject has to find an oddball image without any prior knowledge of the oddball and distractor images. Equivalently, in the neuronal space, the task for the decision maker is to find the image that elicits firing rates different from the others. Here, the decision maker has to “learn” the underlying statistics and then make a decision on the oddball. We model this scenario as one of detecting an odd Poisson point process whose rate differs from the common rate of the others. The revised model suggests a new neuronal dissimilarity index, which is also strongly correlated with the behavioural data, although it performs worse on the existing behavioural data than the index proposed in the first part. The degradation in performance may be attributed to the experimental setup of the existing behavioural tasks, in which search tasks for a given image pair were sequenced one after another, possibly cueing the subject about the upcoming image pair and thus violating this part's assumption that the decision maker has no prior knowledge of the image pairs.
In conclusion, the thesis provides a framework for connecting the perceptual distances in the neuronal and the behavioural spaces. Our framework can possibly be used to analyze the connection between the neuronal space and the behavioural space for various other behavioural tasks.
|
25 |
Approximate Nearest Neighbor Search for the Kullback-Leibler Divergence. 19 March 2018
In a number of applications, data points can be represented as probability distributions. For instance, documents can be represented as topic models, images as histograms, and music too can be represented as a probability distribution. In this work, we address the Approximate Nearest Neighbor problem in which the points are probability distributions and the distance function is the Kullback-Leibler (KL) divergence. We show how to accelerate existing data structures, such as the Bregman Ball Tree, by posing the KL divergence as an inner product embedding. On the practical side, we investigate the use of two very popular indexing techniques: the Inverted Index and Locality Sensitive Hashing. Experiments performed on six real-world datasets showed that the Inverted Index performs better than LSH and the Bregman Ball Tree in terms of queries per second and precision.
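For context, the baseline these indexes are measured against can be written directly: an exact nearest-neighbor scan under the KL divergence over discrete distributions. This brute-force sketch is ours, not the thesis implementation:

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """KL divergence between discrete distributions (e.g. topic vectors)."""
    p, q = p + eps, q + eps          # smooth zeros; KL is undefined at q_i = 0
    return np.sum(p * np.log(p / q))

def nearest_neighbor(query, database):
    """Exact NN under KL(query || x): O(n * d) work per query."""
    divs = [kl(query, x) for x in database]
    return int(np.argmin(divs))

rng = np.random.default_rng(1)
db = rng.dirichlet(np.ones(50), size=1000)   # 1000 points on the simplex
print(nearest_neighbor(db[0], db))           # -> 0, the query itself
```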
|
26 |
Quality strategy and method for transmission: application to image. Xie, Xinwen, 10 January 2019
This thesis focuses on the study of image quality strategies in wireless communication systems and on the design of new quality evaluation metrics. First, a new reduced-reference image quality metric, based on a statistical model in the complex wavelet domain, is proposed. The magnitude and relative phase of the Dual-tree Complex Wavelet Transform coefficients are modelled with probability density functions, whose parameters serve as the reduced-reference features transmitted to the receiver. A Generalized Regression Neural Network is then used to map the reduced-reference features to an objective quality score. Second, with this new metric, a new decoding strategy is proposed for a realistic wireless transmission system, improving the quality of experience (QoE) while ensuring the quality of service (QoS). To this end, a new image database was built and subjective quality assessment tests were carried out to collect viewers' preferences when selecting images decoded with different configurations; a classifier based on support vector machines or k-nearest neighbors then selects the best decoding configuration automatically. Finally, an improved metric is proposed that better accounts for the specific properties of the distortion and for users' preferences. It combines global and local image features and is shown to perform well in optimizing the decoding strategy. Experimental results validate the effectiveness of the proposed image quality metrics and transmission strategies.
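As a much-simplified stand-in for the reduced-reference mechanics (our assumptions throughout: scipy's gennorm replaces the thesis's magnitude/phase models, plain 1-D coefficients replace DT-CWT outputs, and the parameter-distance feature is our simplification of the feature fed to the GRNN):

```python
import numpy as np
from scipy.stats import gennorm

def reduced_reference(coeffs):
    """Sender side: fit a parametric density to the coefficients and keep
    only its parameters (shape, loc, scale) as the reduced reference."""
    return gennorm.fit(coeffs)

def quality_feature(received_coeffs, rr_params):
    """Receiver side: distance between the transmitted and locally fitted
    parameter vectors; this scalar feature would then be mapped to a
    quality score, e.g. by a trained GRNN."""
    rr_rx = gennorm.fit(received_coeffs)
    return float(np.linalg.norm(np.asarray(rr_params) - np.asarray(rr_rx)))

rng = np.random.default_rng(0)
sent = rng.laplace(size=5_000)                 # stand-in for wavelet magnitudes
received = sent + rng.normal(0, 0.3, 5_000)    # channel distortion
print(quality_feature(received, reduced_reference(sent)))
```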
|
27 |
Fuzzy Segmentation of Textures and Videos. Santos, Tiago Souza dos, 17 August 2012
The segmentation of an image aims to subdivide it into constituent regions or objects that have some relevant semantic content. This subdivision can also be applied to videos, where the objects appear in the various frames that compose the video. The task of segmenting an image becomes more complex when it is composed of objects defined by textural features, where color information alone is not a good descriptor of the image. Fuzzy segmentation is a region-growing segmentation algorithm that uses affinity functions to assign to each element in an image a grade of membership (between 0 and 1) for each object. This work presents a modification of the fuzzy segmentation algorithm proposed by Carvalho et al. [2005], with the purpose of improving its time and space complexity. The algorithm was adapted to segment color videos, treating them as 3D volumes. To segment the videos, either a conventional color model or a hybrid model obtained by a method for choosing the best channels was used. The fuzzy segmentation algorithm was also applied to texture segmentation, using affinity functions adapted to the texture of each object. Two types of affinity functions were used: one based on the normal (Gaussian) probability distribution and the other on the skew divergence. The latter, a variation of the Kullback-Leibler divergence, is a measure of the difference between two probability distributions. Finally, the algorithm was tested on some videos and on texture mosaic images composed of images from the Brodatz album.
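The skew divergence mixes the second distribution with the first before applying the KL divergence, SD_alpha(p, q) = KL(p || alpha*q + (1 - alpha)*p), which keeps it finite even where q has empty histogram bins. A minimal sketch (our code; the value of alpha and the histograms are illustrative):

```python
import numpy as np

def kl(p, q):
    mask = p > 0                      # bins where p is zero contribute nothing
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

def skew_divergence(p, q, alpha=0.99):
    """KL(p || alpha*q + (1-alpha)*p): finite even where q has zero bins."""
    return kl(p, alpha * q + (1 - alpha) * p)

# normalized gray-level histograms of two texture patches
p = np.array([0.1, 0.4, 0.5, 0.0])
q = np.array([0.3, 0.3, 0.0, 0.4])
print(skew_divergence(p, q))
```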
|
28 |
Computational Bayesian techniques applied to cosmology. Hee, Sonke, January 2018
This thesis presents work around three themes: dark energy, gravitational waves and Bayesian inference. Neither dark energy nor gravitational-wave physics is yet well constrained, and both present interesting challenges for Bayesian inference, which attempts to quantify our knowledge of the universe given our astrophysical data. A dark energy equation-of-state reconstruction analysis finds that the data favour the vacuum dark energy equation of state $w = -1$. Deviations from vacuum dark energy are shown to favour the super-negative ‘phantom’ regime $w < -1$, but at low statistical significance. The constraining power of various datasets is quantified, finding that data constraints peak around redshift $z = 0.2$ owing to baryon acoustic oscillation and supernova data, whilst cosmic microwave background and Lyman-$\alpha$ forest constraints are less significant. Specific models with a conformal time symmetry in the Friedmann equation, and models with an additional dark energy component, are tested and shown by Bayesian model selection to be competitive with the vacuum dark energy model; that they are not ruled out is believed to be largely due to data quality insufficient to decide between the existing models. Recent detections of gravitational waves by the LIGO collaboration enable the first gravitational-wave tests of general relativity. An existing test from the literature is sped up significantly by a novel method developed in this thesis. The test computes posterior odds ratios, and the new method is shown to compute these accurately and efficiently, providing an approximately 100-fold reduction in the number of likelihood calculations required to reach a given accuracy, compared with computing the evidences directly. Further testing may identify a significant advance in Bayesian model selection using nested sampling, as the method is completely general and straightforward to implement; we note, however, that efficiency gains are not guaranteed, may be problem specific, and need further research.
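For background (a minimal sketch of the standard relation, not the thesis's accelerated method): given the log-evidences that a nested sampler returns for two models, the posterior odds ratio is the Bayes factor scaled by the prior odds.

```python
import numpy as np

def log_posterior_odds(log_z1, log_z0, prior_odds=1.0):
    """log posterior odds of model 1 over model 0:
    log Bayes factor (difference of log-evidences) plus log prior odds."""
    return (log_z1 - log_z0) + np.log(prior_odds)

# log-evidences as a nested sampler would return them (illustrative numbers)
print(log_posterior_odds(-104.2, -106.9))  # about 2.7, i.e. odds of e^2.7 : 1
```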
|