1.
Kolmogorov Complexity of Graphs. Hearn, John. 01 May 2006 (has links)
Kolmogorov complexity is a theory based on the premise that the complexity of a binary string can be measured by its compressibility; that is, a string’s complexity is the length of the shortest program that produces that string. We explore applications of this measure to graph theory.
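Kolmogorov complexity itself is uncomputable, so practical work of this kind typically upper-bounds it by the length of a compressed encoding. A minimal Python sketch of that standard proxy (illustrative only, not taken from the thesis):

```python
import os
import zlib

def compression_length(s: bytes, level: int = 9) -> int:
    """Upper bound on Kolmogorov complexity: the length of the
    zlib-compressed representation of s (plus format overhead)."""
    return len(zlib.compress(s, level))

# A highly regular string compresses far better than random-looking bytes.
regular = b"01" * 500          # 1000 bytes, trivially describable
random_ish = os.urandom(1000)  # 1000 bytes with no exploitable structure

print(compression_length(regular))     # small: the repetition is captured
print(compression_length(random_ish))  # near 1000: almost incompressible
```

The gap between the two outputs is the intuition behind compressibility as a complexity measure: the shorter the description a compressor finds, the less complex the string.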
2.
Image Information Distance Analysis and Applications. Nikvand, Nima. January 2014 (has links)
Image similarity or distortion assessment is fundamental to a broad range of applications throughout the field of image processing and machine vision. These include image restoration, denoising, coding, communication, interpolation, registration, fusion, classification and retrieval, as well as object detection, recognition, and tracking. Many existing image similarity measures have been proposed to work with specific types of image distortions (e.g., JPEG compression). There are also methods such as the structural similarity (SSIM) index that are applicable to a wider range of applications.
However, even these "general-purpose" methods have limited scope. For example, SSIM does not work properly when significant geometric changes exist between the two images being compared.
The theory of Kolmogorov complexity provides solid groundwork for a generic information distance metric between any objects, one that minorizes all metrics in its class. The Normalized Information Distance (NID) metric provides a more useful framework. While appealing, the challenge lies in the implementation, mainly due to the non-computable nature of Kolmogorov complexity. To overcome this, the Normalized Compression Distance (NCD) measure was proposed as an effective approximation of NID, and it has found successful applications in bioinformatics, pattern recognition, and natural language processing. Nevertheless, the application of NID to image similarity and distortion analysis is still at an early stage. Several authors have applied the NID framework and the NCD algorithm to image clustering, image distinguishability, content-based image retrieval and video classification problems, but most report only moderate success. Moreover, because these efforts focus on specific applications, the generic property of NID has not been fully exploited.
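The NCD mentioned above has a standard closed form, NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), where C(.) is the length of a compressed encoding. A small sketch using zlib as the compressor (the inputs are illustrative, not from the thesis):

```python
import zlib

def C(x: bytes) -> int:
    """Compressed length as a computable stand-in for Kolmogorov complexity."""
    return len(zlib.compress(x, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance:
    NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
    cx, cy, cxy = C(x), C(y), C(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"the quick brown fox jumps over the lazy dog " * 20
b = b"the quick brown fox leaps over the lazy cat " * 20
c = bytes(range(256)) * 4

print(ncd(a, b))  # noticeably smaller: the two texts share most structure
print(ncd(a, c))  # closer to 1: little shared information
```

Because a real compressor only approximates the ideal C(.), NCD values can slightly exceed 1 and NCD(x, x) is not exactly 0; only the relative ordering of distances is meaningful in practice.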
In this work, we aim to develop practical solutions for image distortion analysis based on the information distance framework. In particular, we propose two practical approaches to approximating NID for image similarity and distortion analysis. In the first approach, the shortest program that converts one image to another is found from a list of available transformations, and a generic image similarity measure is built by computing the length of this shortest program as an approximation of the conditional Kolmogorov complexity in NID. In the second approach, the complexity of the objects is approximated using Shannon entropy. Specifically, we transform the reference and distorted images into the wavelet domain and assume local independence among image subbands. Inspired by the Visual Information Fidelity (VIF) approach, the Gaussian Scale Mixture (GSM) model is adopted for the Natural Scene Statistics (NSS) of the images to simplify the entropy computation.
When applying the image information distance framework in real-world applications, we find that information distance measures often yield useful features for image processing tasks. In particular, we develop a photo retouching distortion measure based on training a Gaussian kernel Support Vector Regression (SVR) model using information-theoretic features extracted from a database of original and edited images. The proposed measure is shown to correlate well with subjective rankings of the images. Moreover, we propose a tone mapping operator parameter selection scheme for High Dynamic Range (HDR) images. The scheme attempts to find tone mapping parameters that minimize the NID between the HDR image and the resulting Low Dynamic Range (LDR) image, and thereby minimize the information loss in HDR-to-LDR tone mapping. Images created by minimizing NID exhibit enhanced quality.
3.
Decomposition and optimization in near-hierarchical Boolean function systems. Masum, Hassan. January 1900 (has links)
Thesis (Ph. D.)--Carleton University, 2003. / Includes bibliographical references (p. 226-237). Also available in electronic format on the Internet.
4.
Information and distances. Epstein, Samuel Randall. 23 September 2015 (has links)
We prove that all randomized sampling methods produce outliers. Given a computable measure P over natural numbers or infinite binary sequences, there is no method that can produce an arbitrarily large sample such that all its members are typical of P. The second part of this dissertation describes a computationally inexpensive method to approximate Hilbertian distances. This method combines the semi-least-squares inverse technique with the canonical modern machine learning technique known as the kernel trick. In the task of distance approximation, our method was shown to be comparable in performance to a solution employing the Nyström method. Using the kernel semi-least-squares method, we developed and incorporated the Kernel-Subset-Tracker into the Camera Mouse, a video-based mouse replacement software for people with movement disabilities. The Kernel-Subset-Tracker is an exemplar-based method that uses a training set of representative images to produce online templates for positional tracking. Our experiments with test subjects show that augmenting the Camera Mouse with the Kernel-Subset-Tracker yields a statistically significant improvement in communication bandwidth.
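The full semi-least-squares construction is not given in the abstract, but the kernel trick it relies on rests on a standard identity: the distance between two points in the kernel-induced Hilbert space can be computed from kernel evaluations alone, d(x, y)^2 = k(x,x) + k(y,y) - 2 k(x,y). A minimal sketch with a Gaussian (RBF) kernel, as an assumed illustrative choice:

```python
import math

def gaussian_kernel(x, y, sigma=1.0):
    """RBF kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    sq = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-sq / (2 * sigma ** 2))

def hilbertian_distance(x, y, kernel=gaussian_kernel):
    """Distance in the kernel-induced feature space, computed without
    ever constructing the (possibly infinite-dimensional) features:
    d(x, y)^2 = k(x, x) + k(y, y) - 2 k(x, y)."""
    return math.sqrt(max(0.0, kernel(x, x) + kernel(y, y) - 2 * kernel(x, y)))

print(hilbertian_distance((0.0, 0.0), (0.0, 0.0)))  # 0.0: identical points
print(hilbertian_distance((0.0, 0.0), (3.0, 4.0)))  # approaches sqrt(2) for distant points
```

The `max(0.0, ...)` guard absorbs tiny negative values caused by floating-point rounding, which otherwise would make the square root fail for near-identical points.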
5.
Secret Sharing and Algorithmic Information Theory. Kaced, Tarik. 04 December 2012 (has links)
Our work deals with secret sharing from the viewpoints of Shannon's Information Theory and Kolmogorov's Algorithmic Information Theory. We explain how these three subjects are naturally and deeply intertwined. Information inequalities play a central role in this text: they are the inequalities for Shannon entropy, but they are also in exact correspondence with the inequalities for Kolmogorov complexity. Kolmogorov complexity formalizes the idea of randomness for strings. These two reasons alone justify considering secret sharing in the algorithmic framework (if one can share a random secret, one can share anything). Secret sharing was originally studied through a combinatorial lens; only later was it formalized more generally using information-theoretic measures. This step allowed the use of information inequalities, which proved very important in characterizing efficient secret-sharing schemes. The investigation of information inequalities is still in its early stages. We contribute to the subject by introducing the notion of essentially conditional inequalities, which shows once again that information inequalities are not yet fully understood.
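The parenthetical above (sharing a random secret suffices to share any secret) can be illustrated with the simplest scheme of all: n-of-n XOR sharing. This toy sketch is an assumption for illustration, not one of the thesis's schemes; the reduction step is the one-time-pad mask folded into the last share.

```python
import os
from functools import reduce

def share_random(n: int, length: int) -> list[bytes]:
    """n-of-n sharing of a *random* secret: n independent random shares;
    the secret is, by definition, the XOR of all of them."""
    return [os.urandom(length) for _ in range(n)]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def share_secret(secret: bytes, n: int) -> list[bytes]:
    """Share an arbitrary secret by sharing a random one, then folding
    the one-time-pad mask (random_secret XOR secret) into the last share."""
    shares = share_random(n, len(secret))
    random_secret = reduce(xor, shares)
    shares[-1] = xor(shares[-1], xor(random_secret, secret))
    return shares

secret = b"attack at dawn"
shares = share_secret(secret, 3)
print(reduce(xor, shares) == secret)  # all n shares together reconstruct it
```

Any n-1 shares remain uniformly random, so they reveal nothing about the secret; only the full set reconstructs it.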
6.
The Universal Similarity Metric, Applied to Contact Map Comparison in a Two-Dimensional Space. Rahmati, Sara. 27 September 2008 (has links)
Comparing protein structures based on their contact maps is an important problem in structural proteomics. Building a system for reconstructing protein tertiary structures from their contact maps is one of the motivations for devising novel contact map comparison algorithms. Several methods that address the contact map comparison problem have been designed and are briefly discussed in this thesis. However, they suggest scoring schemes that do not satisfy the two characteristics of "metricity" and "universality". In this research we investigate the applicability of the Universal Similarity Metric (USM) to the contact map comparison problem. The USM is an information-theoretical measure based on the concept of Kolmogorov complexity. The ultimate goal of this research is to use the USM in a case-based reasoning system to predict protein structures from their predicted contact maps. Because the contact maps used in such a system are predicted from protein sequences and are not noise-free, we must investigate the noise sensitivity of the USM; this is the first attempt to study its noise tolerance. In our first implementation of the USM we converted the two-dimensional data structures (contact maps) to one-dimensional data structures (strings). The results of this implementation motivated us to circumvent the dimension reduction in our second implementation. The method suggested in this thesis has the advantage of being noise tolerant. We assess this noise tolerance by testing different USM implementation schemes against noise-contaminated versions of established data sets. / Thesis (Master, Computing) -- Queen's University, 2008-09-27 05:53:31.988
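The first implementation described above, linearizing a contact map into a string before applying the USM, can be sketched with a compression-based distance as the computable stand-in for the USM. The toy band-diagonal maps below are hypothetical, chosen only to show that a single flipped contact perturbs the distance very little:

```python
import zlib

def map_to_string(contact_map) -> bytes:
    """Flatten a binary contact map (list of rows) row by row into bytes:
    the dimension-reduction step of the first implementation."""
    return bytes(cell for row in contact_map for cell in row)

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance, a computable USM approximation."""
    C = lambda s: len(zlib.compress(s, 9))
    return (C(x + y) - min(C(x), C(y))) / max(C(x), C(y))

# Toy maps: band-diagonal contacts; the second has one spurious contact.
n = 32
base = [[1 if abs(i - j) <= 2 else 0 for j in range(n)] for i in range(n)]
noisy = [row[:] for row in base]
noisy[5][20] ^= 1  # a single noise-flipped entry

print(ncd(map_to_string(base), map_to_string(noisy)))  # stays small
```

Note that row-by-row flattening destroys vertical adjacency in the map, which is precisely the drawback that motivated the thesis's second, dimension-preserving implementation.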
7.
Wave-front sensing for adaptive optics in astronomy. Van Dam, Marcos. January 1900 (has links)
Originally issued as author's Ph. D. thesis, University of Canterbury, 2002. / Includes bibliographical references (p. 215-222). Thesis available online from Univ. of Canterbury.
8.
Minimum complexity principle for knowledge transfer in artificial learning. Murena, Pierre-Alexandre. 14 December 2018 (has links)
Classical learning methods are often based on a simple but restrictive assumption: present and future data are generated according to the same distribution. This hypothesis is convenient for developing theoretical guarantees that the learning is accurate, but it is not realistic for many application domains that have emerged in recent years. In this thesis, we focus on four distinct problems in artificial intelligence that share one common point: all of them involve knowledge transfer from one domain to another. The first problem is analogical reasoning, which concerns statements of the form "A is to B as C is to D". The second is transfer learning, which involves classification problems in which the training data and test data do not have the same distribution (nor even belong to the same space). The third is data stream mining, i.e., managing data that arrive one by one in a continuous, high-frequency stream whose distribution may change. The last is collaborative clustering, which focuses on the exchange of information between clustering algorithms to improve the quality of their predictions. The main contribution of this thesis is a general framework for dealing with these transfer problems. The framework is based on the notion of Kolmogorov complexity, which measures the inner information of an object. This tool is particularly well suited to the problem of transfer, since it does not rely on probability distributions while still being able to model changes in distribution. Beyond this modeling effort, we offer various discussions on other aspects and applications of the problems of interest; these discussions concern the possibility of transfer in multiple domains and are not based on complexity alone.
9.
Algorithmic Information Theory Applications in Bright Field Microscopy and Epithelial Pattern Formation. Mohamadlou, Hamid. 01 May 2015 (has links)
Algorithmic Information Theory (AIT), also known as Kolmogorov complexity, is a quantitative approach to defining information. AIT is mainly used to measure the amount of information present in observations of a given phenomenon. In this dissertation we explore applications of AIT in two case studies. The first examines bright field cell image segmentation and the second examines the information complexity of multicellular patterns. In the first study we demonstrate that our proposed AIT-based algorithm provides accurate and robust bright field cell segmentation. Cell segmentation is the process of detecting cells in microscopy images, which is usually a challenging task for bright field microscopy due to the low contrast of the images. In the second study, which is the primary contribution of this dissertation, we employ an AIT-based algorithm to quantify the complexity of information content that arises during the development of multicellular organisms. We simulate multicellular organism development by coupling Gene Regulatory Networks (GRNs) within an epithelial field. Our results show that the configuration of the GRNs influences the information complexity of the resultant multicellular patterns.
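The dissertation's exact complexity measure is not given in the abstract, but the general AIT recipe for quantifying a multicellular pattern can be sketched with a compression proxy: encode the grid of discrete cell states as bytes and take the compressed length. The three toy fields below are hypothetical examples:

```python
import random
import zlib

def pattern_complexity(states) -> int:
    """Compression-based estimate of the information content of a
    multicellular pattern given as a 2-D grid of discrete cell states."""
    flat = bytes(cell for row in states for cell in row)
    return len(zlib.compress(flat, 9))

random.seed(0)
n = 32
uniform  = [[0] * n for _ in range(n)]                       # one cell type
stripes  = [[i % 2 for _ in range(n)] for i in range(n)]     # periodic bands
speckled = [[random.randrange(4) for _ in range(n)] for _ in range(n)]

# More irregular patterns are harder to compress, hence "more complex".
print(pattern_complexity(uniform),
      pattern_complexity(stripes),
      pattern_complexity(speckled))
```

This captures the qualitative point of the second study: different pattern-forming regimes leave measurably different amounts of information in the resulting tissue.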
10.
On the assessment of manufacturing systems complexity. Ευθυμίου, Κωνσταντίνος. 12 October 2013 (has links)
The objective of the present study is to develop methods for assessing manufacturing systems complexity and to investigate the relationship between flexibility and complexity. Towards this target, a complete approach based on information theory is proposed, permitting the analytical, quantitative and systematic modeling and quantification of both static and dynamic manufacturing complexity. Static complexity concerns the structure of the manufacturing system, namely the products, processes and resources that constitute the system, as well as their interconnections; it is treated as the information required to describe the system. Multi-domain matrices modeling the relationships between products, processes and resources are formalized as networks following the notions of graph theory. The information content of each matrix is assessed using the Shannon entropy measure, and their aggregation yields the static complexity. Dynamic complexity is related to the uncertainty in the behaviour of a manufacturing system and, in the present study, is associated with the unpredictability of performance-indicator timeseries. The unpredictability of these timeseries, which are obtained by computer simulation, is captured with the Lempel-Ziv algorithm, which estimates Kolmogorov complexity. Dynamic complexity is either the unpredictability of a specific timeseries or the weighted mean over a set of performance-indicator timeseries produced under different product-demand scenarios. The relationship between flexibility and complexity is investigated for a group of 19 different configurations of a manufacturing system. In particular, operation flexibility, which refers to the system's ability to produce a set of products through different machines, materials, operations and sequences of operations, and total complexity, both static and dynamic, are examined using a utility function. As a case study, two assembly lines producing three car-floor model types at three different product mixes are investigated: the dynamic complexity of each assembly line is assessed and the relationship between product mix and dynamic complexity is studied. The evaluation of the case study revealed the efficiency of the suggested approach and validated its applicability to industrial environments.
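The two measures described above can be sketched on toy data (the incidence matrix and timeseries below are assumptions for illustration, not the thesis's models): static complexity as Shannon entropy of a product/process/resource incidence matrix, and dynamic complexity as the phrase count of a Lempel-Ziv (1976) parsing of a binarized performance-indicator timeseries.

```python
import math
import random
from collections import Counter

def shannon_entropy(matrix) -> float:
    """Static complexity proxy: Shannon entropy (bits) of the entry
    distribution of an incidence matrix."""
    flat = [v for row in matrix for v in row]
    n = len(flat)
    return -sum((c / n) * math.log2(c / n) for c in Counter(flat).values())

def lz76_complexity(s: str) -> int:
    """Dynamic complexity proxy: number of phrases in a Lempel-Ziv (1976)
    parsing of s; more phrases means a less predictable sequence."""
    i, phrases, n = 0, 0, len(s)
    while i < n:
        l = 1
        # grow the current phrase while it still occurs in the preceding text
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        phrases += 1
        i += l
    return phrases

# Toy incidence matrix: 1 marks a product-process-resource relation.
ppr = [[1, 0, 1],
       [0, 1, 1],
       [1, 1, 0]]
print(shannon_entropy(ppr))  # ~0.918 bits (six 1s, three 0s)

# Binarized performance-indicator timeseries: periodic vs. irregular.
random.seed(0)
periodic = "01" * 50
irregular = "".join(random.choice("01") for _ in range(100))
print(lz76_complexity(periodic), lz76_complexity(irregular))  # periodic is far lower
```

In the thesis's terms, the first quantity would be aggregated over all multi-domain matrices to give static complexity, while the second would be computed per simulated indicator and averaged over demand scenarios to give dynamic complexity.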