1

INFORMATION-THEORETIC OPTIMIZATION OF WIRELESS SENSOR NETWORKS AND RADAR SYSTEMS

Kim, Hyoung-soo, January 2010
Three information measures are discussed and used as objective functions for the optimization of wireless sensor networks (WSNs) and radar systems. In addition, a long-term performance measure is developed for evaluating slow-fading WSNs. Three system applications are considered: a distributed detection system, a distributed multiple-hypothesis system, and a radar target recognition system.

First, we consider sensor power optimization for distributed binary detection systems communicating over slow-fading orthogonal multiple-access channels. Earlier work demonstrated that system performance can be improved by adjusting transmit power to maximize the J-divergence of the binary detection problem. We define an outage probability for the slow-fading system as a long-term performance measure and derive the detection outage analytically for the given system model. From this analytical result, the diversity gain is derived and shown to be proportional to the number of sensor nodes. We then extend the optimized power control strategy to a distributed multiple-hypothesis system, enhancing the power optimization by exploiting a priori probabilities and local sensor statistics, and extend the outage probability to the multiple-hypothesis problem.

The third application is radar waveform design with a new performance measure: Task-Specific Information (TSI). TSI is an information-theoretic measure formulated for one or more specific sensor tasks by encoding the task(s) directly into the signal model via source variables. For example, in the problem of correctly classifying a linear system from a set of known alternatives, the source variable takes the form of an indicator vector that selects the transfer function of the true hypothesis. We compare TSI-optimized waveforms with conventional waveforms and other information-theoretic waveform designs via simulation, applying radar-specific constraints and signal models to the waveform optimization.
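To make the first objective concrete, here is a minimal sketch of the J-divergence and a Monte Carlo detection-outage estimate. It assumes a toy Gaussian shift-in-mean observation model and i.i.d. Rayleigh fading with per-sensor divergence scaling linearly with received SNR; none of these modeling details are specified in the abstract, so this is an illustration rather than the thesis's actual system model.

```python
# Hedged sketch (not the thesis's exact model): J-divergence for a Gaussian
# shift-in-mean detection problem, and a Monte Carlo "detection outage"
# estimate over i.i.d. Rayleigh-fading sensor channels.
import numpy as np

def j_divergence_gaussian(mu0, mu1, sigma2):
    """Symmetric KL (J-) divergence between N(mu0, sigma2) and N(mu1, sigma2).
    For equal variances this reduces to (mu1 - mu0)^2 / sigma2."""
    return (mu1 - mu0) ** 2 / sigma2

def detection_outage(n_sensors, snr_db, j_min, trials=100_000, rng=None):
    """Estimate P(total divergence < j_min), i.e. the outage probability,
    assuming each sensor's contribution scales with its instantaneous SNR."""
    rng = rng or np.random.default_rng(0)
    snr = 10 ** (snr_db / 10)
    j_clear = j_divergence_gaussian(0.0, 1.0, 1.0)   # divergence at unit SNR
    # |h|^2 for Rayleigh fading is exponentially distributed with unit mean.
    gains = rng.exponential(1.0, size=(trials, n_sensors))
    j_total = (j_clear * snr * gains).sum(axis=1)    # aggregate divergence
    return np.mean(j_total < j_min)

# Outage drops steeply with the sensor count, illustrating diversity gain.
for n in (1, 2, 4, 8):
    print(n, detection_outage(n, snr_db=0.0, j_min=2.0))
```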
2

Identifying Criticality in Market Sentiment: A Data Mining Approach

Sahu, Vaibhav, 01 December 2018
The aim of this thesis is to study and identify periods of high activity in commodity and stock market sentiment using a data mining approach. Tools are developed to extract relevant information from web searches and Twitter feeds by tallying certain keywords and their combinations at regular intervals. Periods of high activity are identified with a measure of complexity originally developed for the analysis of living systems. Experiments were conducted to test whether this activity measure can serve as a predictor of changes in stock market and commodity prices.
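The abstract does not name the complexity measure, so the sketch below illustrates only the keyword-tally step, with a simple rolling z-score as a stand-in criterion for flagging high-activity windows. The keyword list and the example texts are hypothetical.

```python
# Hedged sketch of the keyword-tally step. The thesis's specific complexity
# measure is not named in the abstract; a plain z-score on the tallies is
# used here purely as an illustrative stand-in.
from collections import Counter
from statistics import mean, stdev

KEYWORDS = {"crash", "rally", "selloff", "bubble"}   # hypothetical list

def tally_window(texts):
    """Count keyword hits in one interval's worth of tweets/queries."""
    counts = Counter()
    for text in texts:
        for token in text.lower().split():
            if token in KEYWORDS:
                counts[token] += 1
    return sum(counts.values())

def flag_high_activity(tallies, z_thresh=2.0):
    """Return indices of windows whose tally z-score exceeds z_thresh."""
    mu, sd = mean(tallies), stdev(tallies)
    if sd == 0:
        return []
    return [i for i, t in enumerate(tallies) if (t - mu) / sd > z_thresh]

windows = [["markets steady today"],
           ["huge selloff", "crash fears", "crash"],
           ["quiet session"],
           ["rally continues", "bubble talk", "rally rally"]]
tallies = [tally_window(w) for w in windows]
print(tallies, flag_high_activity(tallies, z_thresh=1.0))
```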
3

SALZA: a universal information measure between strings for classification and causality inference

Revolle, Marion, 25 October 2018
Data in the form of strings are varied (DNA, text, quantized EEG, ...) and cannot always be modeled statistically. A universal description of strings, independent of probabilities, is therefore needed. Kolmogorov complexity was introduced in the 1960s to address this issue. The principle is simple: a string is complex if no short description of it exists. Kolmogorov complexity is the algorithmic counterpart of Shannon entropy and underpins algorithmic information theory. However, Kolmogorov complexity is not computable in finite time, which makes it unusable in practice.

The first to make Kolmogorov complexity operational were Lempel and Ziv in 1976, who proposed restricting the operations allowed in the description. Another approach uses the size of the string compressed by a lossless compressor. However, both estimators are poorly defined for the conditional and joint cases, so it is difficult to extend Lempel-Ziv complexity or compressors to algorithmic information theory.

Starting from this observation, we introduce SALZA, a new universal information measure based on Lempel-Ziv complexity. The implementation and the sound definition of our measure allow the quantities of algorithmic information theory to be computed efficiently.

Standard lossless compressors were used by Cilibrasi and Vitányi to build a very popular universal classifier: the normalized compression distance (NCD). For this application we propose our own estimator, the NSD, and show that it is a universal semi-distance between strings. The NSD outperforms the NCD by adapting naturally to more diverse data and by defining appropriate conditioning through SALZA.

Using the universal prediction qualities of Lempel-Ziv complexity, we then explore questions of causality inference. First, the algorithmic Markov conditions are made computable thanks to SALZA. Then, by defining algorithmic directed information for the first time, we propose an algorithmic counterpart of Granger causality. We demonstrate the relevance of our approach on synthetic and real data.
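SALZA and the NSD are the thesis's own constructions and are not reproduced here. As a point of reference, here is a minimal sketch of the compressor-based NCD of Cilibrasi and Vitányi that the NSD is compared against, using the standard-library lzma compressor as the complexity estimator; the example strings are arbitrary.

```python
# Sketch of the baseline NCD of Cilibrasi and Vitanyi, using a standard
# lossless compressor (lzma) as a crude stand-in for Kolmogorov complexity.
import lzma

def c(data: bytes) -> int:
    """Compressed size as an upper-bound estimate of complexity."""
    return len(lzma.compress(data))

def ncd(x: bytes, y: bytes) -> float:
    """NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

x1 = b"ACGTACGTACGT" * 50
x2 = b"ACGTACGAACGT" * 50      # near-copy of x1
y = b"the quick brown fox jumps over the lazy dog " * 20
print(ncd(x1, x2))   # smaller: the strings share most of their structure
print(ncd(x1, y))    # closer to 1: little shared structure
```

Because real compressors add headers and window-size limits, the NCD behaves poorly on very short or very long strings, which is one motivation the abstract gives for replacing the compressor with the SALZA estimator.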
