41 |
DESIGN OF AN FPGA-BASED COMPUTING PLATFORM FOR REAL-TIME 3D MEDICAL IMAGING
Li, Jianchun 19 January 2005 (has links)
No description available.
|
42 |
STUDY ON INFORMATION THEORY: CONNECTION TO CONTROL THEORY, APPROACH AND ANALYSIS FOR COMPUTATION
Theeranaew, Wanchat 09 February 2015 (has links)
No description available.
|
43 |
Analysis of Rank Distance for Malware Classification
Subramanian, Nandita January 2016 (has links)
No description available.
|
44 |
Using Kullback-Leibler Divergence to Analyze the Performance of Collaborative Positioning
Nounagnon, Jeannette Donan 12 July 2016 (has links)
Geolocation accuracy is a crucial, often life-or-death, factor for rescue teams. Natural and man-made disasters are convincing reasons why fast and accurate position location is necessary. One way to unleash the potential of positioning systems is collaborative positioning, which consists of simultaneously solving for the positions of two nodes that need to locate themselves. Although the literature has addressed the benefits of collaborative positioning in terms of accuracy, a theoretical foundation for its performance has been largely lacking.
This dissertation uses information theory to perform a theoretical analysis of the value of collaborative positioning. The main research problem addressed is: 'Is collaboration always beneficial? If not, can we determine theoretically when it is and when it is not?' We show that the immediate advantage of collaborative estimation is the acquisition of an additional set of information between the collaborating nodes. This new information reduces the uncertainty in the localization of both nodes, and under certain conditions the reduction is the same for both. Hence collaboration is beneficial in terms of uncertainty.
However, reduced uncertainty does not necessarily imply improved accuracy. We therefore define a novel theoretical model to analyze the improvement in accuracy due to collaboration. Using this model, we introduce a variational analysis of collaborative positioning to determine the factors that affect this improvement. We derive range conditions under which collaborative positioning starts to degrade the performance of standalone positioning, and we derive and test criteria to determine on the fly (ahead of time) whether collaborating is worthwhile for improving accuracy.
The potential applications of this research include, but are not limited to: intelligent positioning systems, collaborating manned and unmanned vehicles, and improvement of GPS applications. / Ph. D.
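As a concrete illustration of the yardstick used throughout the dissertation, the following minimal Python sketch (an illustration of ours, not material from the thesis) computes the Kullback-Leibler divergence between two Gaussian position estimates, e.g. a collaborative fix versus a standalone fix; the standard deviations are hypothetical numbers.

import numpy as np

def kl_gaussian(mu_p, sigma_p, mu_q, sigma_q):
    """KL(P || Q) in nats for two univariate Gaussian position estimates."""
    return (np.log(sigma_q / sigma_p)
            + (sigma_p**2 + (mu_p - mu_q)**2) / (2 * sigma_q**2)
            - 0.5)

# Hypothetical example: a standalone fix with 10 m standard deviation versus a
# collaborative fix that tightens it to 4 m around the same position estimate.
standalone = (0.0, 10.0)
collaborative = (0.0, 4.0)
print(kl_gaussian(*collaborative, *standalone))  # about 0.5 nats of information gained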
|
45 |
Spatially Resolved Hydration Statistical Mechanics at Biomolecular Surfaces from Atomistic Simulations
Heinz, Leonard 13 December 2021 (has links)
No description available.
|
46 |
The Maximal Lyapunov Exponent / Methodological Contributions to Theory and Application in Sports Science
Schroll, Arno 21 October 2020 (has links)
Reductions in movement stability, caused by an impaired ability of the neuromuscular system to respond adequately to perturbations, are associated with, for example, the risk of falling. This has consequences for quality of life and for costs in health care. However, how stability should be measured is still debated. This thesis examines the maximal Lyapunov exponent, which has become popular in sports science over the last two decades. The exponent quantifies how sensitively a system reacts to small perturbations. A measured data series and its time-delayed copies are embedded in a higher-dimensional space, and the exponent is calculated with respect to this reconstructed dynamics as the average slope of the logarithmic divergence curve of initially nearby points. It thus measures how fast two initially close trajectories of a cyclic movement diverge. The literature, however, shows a lack of knowledge about the consequences of applying this system theory to tasks in sports science. The experimental part provides strong evidence that, in the evaluation of movements, the exponent reflects less a complex determinism than simply the level of dynamic noise present in the time series. The higher the noise level, the lower the stability of the system; applying noise reduction therefore leads to smaller effect sizes. This has consequences: the values of the average mutual information, until now used only to choose the delay for the embedding, can already show differences in stability. Furthermore, the estimation of the embedding dimension d (independently of the algorithm used) depends strongly on the length of the data series, and current values of d tend to be overestimated.
The greatest effect sizes were observed in dimension three, and it is recommended to use the very beginning of the divergence curve for the linear fit. These findings support a more efficient and standardized approach to stability analysis and improve the ability to show differences between conditions or groups.
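The procedure summarized above is compact enough to sketch. The Python fragment below is a minimal, Rosenstein-style illustration (not the thesis code) of the described steps: time-delay embedding, nearest neighbours outside a Theiler window, and a linear fit to the very beginning of the mean logarithmic divergence curve; the parameter values are placeholders.

import numpy as np
from scipy.spatial.distance import cdist

def max_lyapunov(x, delay=10, dim=3, fit_len=15, theiler=50):
    x = np.asarray(x, dtype=float)
    n = len(x) - (dim - 1) * delay
    # time-delay embedding of the scalar series into `dim` dimensions
    emb = np.column_stack([x[i * delay : i * delay + n] for i in range(dim)])
    # nearest neighbour of every point, excluding temporally close points
    dists = cdist(emb, emb)
    idx = np.arange(n)
    dists[np.abs(idx[:, None] - idx[None, :]) < theiler] = np.inf
    nn = np.argmin(dists, axis=1)
    # mean logarithmic divergence over the first `fit_len` steps
    div = []
    for k in range(fit_len):
        valid = (idx + k < n) & (nn + k < n)
        d = np.linalg.norm(emb[idx[valid] + k] - emb[nn[valid] + k], axis=1)
        div.append(np.mean(np.log(d[d > 0])))
    # slope of the early part of the curve = maximal Lyapunov exponent per sample
    return np.polyfit(np.arange(fit_len), div, 1)[0]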
|
47 |
[en] RÉNYI ENTROPY AND CAUCHY-SCHWARTZ MUTUAL INFORMATION APPLIED TO THE MIFS-U VARIABLES SELECTION ALGORITHM: A COMPARATIVE STUDY / [pt] ENTROPIA DE RÉNYI E INFORMAÇÃO MÚTUA DE CAUCHY-SCHWARTZ APLICADAS AO ALGORITMO DE SELEÇÃO DE VARIÁVEIS MIFS-U: UM ESTUDO COMPARATIVO
LEONARDO BARROSO GONCALVES 08 September 2008 (has links)
This dissertation addresses the algorithm for variable selection based on mutual information under uniform information distribution (MIFS-U) and presents an alternative method for estimating entropy and mutual information, the measures that constitute the basis of this selection algorithm. The method is founded on the Cauchy-Schwartz quadratic mutual information and the quadratic Rényi entropy, combined, in the case of continuous variables, with Parzen window density estimation. Experiments were carried out on real public-domain data, comparing this method with a widely used one that adopts the Shannon entropy definition and, in the case of continuous variables, the histogram density estimator. The results show small variations between the two methods, which suggests a future investigation using a classifier, such as a neural network, to evaluate these results qualitatively in light of the final objective: higher classification accuracy.
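To make the quantities named in the abstract concrete, here is a short Python sketch (our illustration, not the dissertation's code) of the Parzen-window estimators of the quadratic Rényi entropy and the Cauchy-Schwartz quadratic mutual information for one-dimensional samples; the kernel width sigma is a free parameter.

import numpy as np

def gauss(d, sigma):
    # Gaussian kernel of variance 2*sigma**2 (the convolution of two Parzen
    # kernels of width sigma), evaluated at distance d
    return np.exp(-d**2 / (4 * sigma**2)) / np.sqrt(4 * np.pi * sigma**2)

def renyi_quadratic_entropy(x, sigma=0.5):
    """H2(X) = -log of the information potential of the Parzen estimate."""
    K = gauss(x[:, None] - x[None, :], sigma)
    return -np.log(K.mean())

def cauchy_schwartz_mi(x, y, sigma=0.5):
    """I_CS(X;Y) = log(V_J * V_M / V_C**2), each term in closed form."""
    Kx = gauss(x[:, None] - x[None, :], sigma)
    Ky = gauss(y[:, None] - y[None, :], sigma)
    v_j = (Kx * Ky).mean()                              # joint potential
    v_m = Kx.mean() * Ky.mean()                         # marginal potential
    v_c = (Kx.mean(axis=1) * Ky.mean(axis=1)).mean()    # cross potential
    return np.log(v_j * v_m / v_c**2)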
|
48 |
Um algoritmo eficiente para o crescimento de redes sobre o grafo probabilístico completo do sistema de regulação gênica considerado / An efficient algorithm for growing networks on the complete probabilistic graph of the considered gene regulation system
Lima, Leandro de Araujo 10 August 2009 (has links)
Gene expression levels are known to be among the factors that indicate how active genes are at a given moment. Advances in microarray technology have made it possible to measure the expression levels of thousands of genes at the same time. These data can be collected as time series that can be treated statistically to obtain information about the relationships between genes. Several models have been proposed to treat gene networks mathematically, and they have evolved to incorporate more and more features of real networks. This work reviews discrete models of gene regulatory networks: first Boolean networks, a deterministic model, and then probabilistic Boolean networks and probabilistic genetic networks, models that treat the problem stochastically. Using the last of these models, two methods for estimating the level of prediction between genes are presented: the coefficient of determination and mutual information. Beyond estimating these relations, techniques have been developed to build networks starting from specific genes, called seeds. Two such network-growing methods are presented and, based on them, a third method developed in this work. An algorithm was created that grows the network by changing the seeds at each iteration, grouping the genes into sets with different confidence levels, called layers; the algorithm also uses other criteria to add new genes to the network.
A software tool is then presented that, from time-series gene-expression data, estimates the dependences among genes and grows the network around the genes one wishes to study; the improvements made to the program are also described. Finally, tests performed with data from Plasmodium falciparum, the parasite that causes malaria, are presented.
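As an illustration of the kind of pairwise dependence score used when growing the network around seed genes, the sketch below (with an assumed quantization into three levels; not the software described in the text) estimates the mutual information between two quantized expression profiles.

import numpy as np

def mutual_information(x, y, levels=3):
    """MI in bits between two expression profiles quantized into `levels` bins."""
    cuts_x = np.quantile(x, np.linspace(0, 1, levels + 1)[1:-1])
    cuts_y = np.quantile(y, np.linspace(0, 1, levels + 1)[1:-1])
    xq, yq = np.digitize(x, cuts_x), np.digitize(y, cuts_y)
    joint = np.zeros((levels, levels))
    for a, b in zip(xq, yq):
        joint[a, b] += 1                      # joint histogram of the two profiles
    joint /= joint.sum()
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (px[:, None] * py[None, :])[nz])))

# Growing from a seed gene could then rank candidates by this score, e.g.
# scores = {g: mutual_information(expr[seed], expr[g]) for g in candidates}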
|
49 |
Efficacité, généricité et praticabilité de l'attaque par information mutuelle utilisant la méthode d'estimation de densité par noyau / Efficiency, genericity and practicability of kernel-based mutual information analysis
Carbone, Mathieu 16 March 2015 (links)
Nowadays, Side-Channel Analysis (SCA) attacks are easy to implement yet powerful against cryptographic implementations, posing a serious threat to the security of cryptosystems. Indeed, the execution of a cryptographic algorithm unavoidably leaks information about the data it manipulates internally through side channels (time, temperature, power consumption, electromagnetic emanations, etc.), and some of this information is sensitive, i.e., depends on the secret key. One of the most important SCA steps for an adversary is to quantify the dependency between the measured side-channel leakage and an assumed leakage model using a statistical tool, also called a distinguisher, in order to obtain an estimate of the secret key. A plethora of distinguishers has been proposed in the SCA literature.
This thesis focuses on attacks that use mutual information (MI) as the distinguisher, the so-called Mutual Information Analysis (MIA), and addresses its main practical issue: estimating the MI index, which in turn requires estimating the underlying distributions. Investigations are conducted using a popular nonparametric technique that makes minimal assumptions about the underlying density: Kernel Density Estimation (KDE). First, a bandwidth selection scheme based on an adaptivity criterion, specific to SCA, is proposed. An in-depth analysis is then conducted to provide a guideline for making MIA efficient and generic with respect to this tuning parameter, and to establish which attack context (related to the statistical moment carrying the leakage) is most favorable to MIA. Second, we address the high computational cost of kernel-based MIA (closely tied to the bandwidth) through a dual-tree algorithm that allows fast evaluation of pairwise kernel functions. We also show experimentally that MIA in the frequency domain is effective and fast when combined with a frequency-domain leakage model. Additionally, we suggest an extension of an existing method to detect leakage carried by higher-order statistical moments.
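As a rough illustration of what a kernel-based MIA distinguisher computes, the Python sketch below (our assumptions: a placeholder S-box table and scipy's default bandwidth rather than the adaptive criterion proposed in the thesis) scores a key guess by the mutual information between measured leakage and a Hamming-weight leakage model, each entropy term estimated with a kernel density estimate.

import numpy as np
from scipy.stats import gaussian_kde

SBOX = list(range(256))  # placeholder table; a real attack would use the cipher's S-box

def kde_entropy(samples):
    """Resubstitution estimate of differential entropy (nats) from a Gaussian KDE."""
    kde = gaussian_kde(samples)
    return -np.mean(np.log(kde(samples)))

def mia_score(leakage, plaintexts, key_guess):
    leakage = np.asarray(leakage, dtype=float)
    # hypothetical leakage model: Hamming weight of the S-box output
    model = np.array([bin(SBOX[p ^ key_guess]).count("1") for p in plaintexts])
    h_l = kde_entropy(leakage)
    h_l_given_m = 0.0
    for m in np.unique(model):
        sel = leakage[model == m]
        if len(sel) > 1:                         # need at least two traces per class
            h_l_given_m += (len(sel) / len(leakage)) * kde_entropy(sel)
    return h_l - h_l_given_m                     # I(L; M); the largest score wins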
|
50 |
Agrupamento de textos utilizando divergência Kullback-Leibler / Text grouping using Kullback-Leibler divergence
Willian Darwin Junior 22 February 2016 (links)
This work proposes a text-grouping methodology that can be used in textual search in general and, more specifically, in the distribution of lawsuits with the aim of reducing the time needed to resolve judicial conflicts. The proposed methodology applies the Kullback-Leibler divergence to the frequency distributions of the word stems occurring in the texts. Several groups of stems are considered, formed according to how often they occur across the texts, and the distributions are taken with respect to each of these groups. For each group, divergences are computed against the distribution of a reference text formed by aggregating all the texts in the sample, yielding one value for each text with respect to each group of stems. Finally, these values are used as attributes of each text in a clustering process driven by an implementation of the K-Means algorithm, resulting in the grouping of the texts. The methodology is tested on simple benchmark examples and applied to concrete cases of electrical-failure records, texts with common themes, and legal texts, and the result is compared with a classification performed by an expert. As byproducts of this research, a graphical environment for developing models based on Pattern Recognition and Bayesian Networks and a study of the possibilities of using parallel processing in the learning of Bayesian Networks were also produced.
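A minimal Python sketch of the pipeline described above (the number of stem groups and clusters are assumed values, and this is not the author's implementation): each text receives one Kullback-Leibler divergence per group of stems, measured against the pooled reference text, and those attributes are then clustered with K-Means.

import numpy as np
from sklearn.cluster import KMeans

def kl_divergence(p, q, eps=1e-12):
    p = p / max(p.sum(), eps)
    q = q / max(q.sum(), eps)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def cluster_texts(stem_counts, n_groups=4, n_clusters=3):
    """stem_counts: (n_texts, n_stems) matrix of stem frequencies per text."""
    reference = stem_counts.sum(axis=0)                 # pooled reference text
    # split stems into groups according to how often they occur in the sample
    groups = np.array_split(np.argsort(reference), n_groups)
    # one attribute per (text, stem group): KL divergence from the reference
    features = np.array([[kl_divergence(row[g], reference[g]) for g in groups]
                         for row in stem_counts])
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(features)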
|