
Hypercube coloring and the structure of binary codes

Rix, James Gregory 11 1900 (has links)
A coloring of a graph is an assignment of colors to its vertices so that no two adjacent vertices are given the same color. The chromatic number of a graph is the least number of colors needed to color all of its vertices. Graph coloring has many real-world applications, such as scheduling and register allocation. Computationally, the decision problem of whether a general graph is m-colorable is NP-complete for m ≥ 3. The graph studied in this thesis is a well-known combinatorial object, the k-dimensional hypercube, Qk. The hypercube itself is 2-colorable for all k; however, coloring the square of the cube is a much more interesting problem. This is the graph in which the vertices are binary vectors of length k, and two vertices are adjacent if and only if the Hamming distance between the two vectors is at most 2. Any color class in a coloring of Qk^2 is a binary (k, M, 3) code. This thesis will begin with an introduction to binary codes and their structure. One of the most fundamental combinatorial problems is finding optimal binary codes, that is, binary codes with the maximum cardinality satisfying a specified length and minimum distance. Many upper and lower bounds have been produced, and we will analyze and apply several of these. This leads to many interesting results about the chromatic number of the square of the cube. The smallest k for which the chromatic number of Qk^2 is unknown is k = 8; however, it can be determined that this value is either 13 or 14. Computational approaches to determining the chromatic number of Q8^2 were undertaken. We were unable to determine whether 13 or 14 is the true value; however, much valuable insight was gained into the structure of this graph and the computational difficulty that lies within. Since a 13-coloring of Q8^2 must have between 9 and 12 color classes that are (8, 20, 3) binary codes, this led to a thorough investigation of the structure of such binary codes.
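The adjacency condition described above, and the resulting requirement that a color class be a (k, M, 3) code, can be checked directly. This is an illustrative sketch with hypothetical helper names (`hamming`, `is_code_min_dist`), not code from the thesis:

```python
from itertools import combinations

def hamming(u, v):
    """Hamming distance between two equal-length binary tuples."""
    return sum(a != b for a, b in zip(u, v))

def is_code_min_dist(code, d):
    """True iff every pair of codewords is at Hamming distance >= d."""
    return all(hamming(u, v) >= d for u, v in combinations(code, 2))

# Two vertices of the square of the cube are adjacent iff their Hamming
# distance is 1 or 2, so a color class must have pairwise distance >= 3,
# i.e. it must be a binary (k, M, 3) code.
print(is_code_min_dist([(0, 0, 0, 0), (1, 1, 1, 0)], 3))  # a valid color class
print(is_code_min_dist([(0, 0, 0, 0), (1, 0, 0, 1)], 3))  # distance 2: adjacent in Qk^2
```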

The Binary String-to-String Correction Problem

Spreen, Thomas D. 30 August 2013 (has links)
String-to-String Correction is the process of transforming some mutable string M into an exact copy of some other string (the target string T) using a shortest sequence of well-defined edit operations. The formal STRING-TO-STRING CORRECTION problem asks for the optimal solution using just two operations: symbol deletion and swap of adjacent symbols. String correction problems using only swaps and deletions are computationally interesting; in his paper On the Complexity of the Extended String-to-String Correction Problem (1975), Robert Wagner proved that the String-to-String Correction problem under swap and deletion operations only is NP-complete for unbounded alphabets. In this thesis, we present the first careful examination of the binary-alphabet case, which we call Binary String-to-String Correction (BSSC). We present several special cases of BSSC for which an optimal solution can be found in polynomial time; in particular, the case where T and M have an equal number of occurrences of a given symbol has a polynomial-time solution. We also demonstrate and prove several properties of BSSC, some of which do not necessarily hold in the case of String-to-String Correction: for instance, that the order of operations is irrelevant; that symbols in the mutable string, if swapped, will only ever swap in one direction; that the length of the Longest Common Subsequence (LCS) of the two strings is monotone nondecreasing during the execution of an optimal solution; and that there exists no correlation between the effect of a swap or delete operation on the LCS and the optimality of that operation. About a dozen other results applicable to Binary String-to-String Correction are also presented. / Graduate / 0984 / 0715 / tspreen@gmail.com
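The swap-and-delete edit model can be illustrated with a tiny brute-force search over edit sequences (exponential, for intuition on small inputs only; `bssc_min_ops` is a hypothetical name, not one of the thesis's polynomial-time special-case algorithms):

```python
from collections import deque

def bssc_min_ops(m, t):
    """Shortest number of adjacent swaps and deletions turning the
    mutable string m into the target t, by breadth-first search over
    reachable strings. A sketch for small inputs only."""
    if m == t:
        return 0
    seen = {m}
    frontier = deque([(m, 0)])
    while frontier:
        s, d = frontier.popleft()
        succs = []
        for i in range(len(s)):               # symbol deletions
            succs.append(s[:i] + s[i + 1:])
        for i in range(len(s) - 1):           # adjacent swaps
            if s[i] != s[i + 1]:
                succs.append(s[:i] + s[i + 1] + s[i] + s[i + 2:])
        for n in succs:
            if n == t:
                return d + 1
            if n not in seen:
                seen.add(n)
                frontier.append((n, d + 1))
    return None  # t is not reachable from m (swaps preserve symbol counts)

print(bssc_min_ops("0110", "10"))  # delete a '0', then a '1': 2 operations
```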

Le routage dans les réseaux DTN : du cas pratique des réseaux satellitaires quasi-déterministes à la modélisation théorique / Routing in DTN: from the practical case of quasi-deterministic satellite networks to theoretical modeling

Diana, Rémi 06 December 2012 (has links)
Satellite communication is the achievement of more than 50 years of research in the fields of telecommunications and space technologies. The first satellites had exorbitant costs for very limited performance. Technological advances in these areas have made the cost/performance ratio satisfactory and commercially viable, which multiplied launches and thus allowed complete satellite networks to be built. Today, there are many GEO and LEO satellite constellations used for civilian or military applications. In general, routing in these constellations is done by pre-computing existing routes; these routes are then used for a given period and refreshed if needed. This type of routing is optimal only on deterministic topologies, so we need to consider other solutions if we relax this assumption. The objective of this thesis is to explore alternatives to pre-computed routing. As a potential solution, we propose to assess the suitability of replication-based routing protocols from the world of delay-tolerant networks (DTN) for satellite constellations. To provide a relevant framework for studying this topic, we focus on a particular constellation that has a quasi-deterministic nature and does not provide direct connectivity between all nodes of the system. In a second part, we focus on modeling the Binary Spray and Wait routing protocol. We develop a model that can theoretically determine the distribution of the end-to-end delay for any type of network, homogeneous or heterogeneous. Finally, we present a possible use of this model to conduct more in-depth theoretical analysis.
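The copy-splitting rule at the heart of Binary Spray and Wait can be sketched in a few lines (a sketch of the rule only, not of a full DTN stack; `spray` is a hypothetical helper name):

```python
def spray(copies):
    """Binary Spray and Wait split rule: on contact, a node holding
    more than one copy of a bundle keeps ceil(L/2) copies and hands
    floor(L/2) to the encountered relay; a node holding a single copy
    is in the wait phase and only delivers directly to the destination."""
    handed = copies // 2
    return copies - handed, handed

# Starting from an initial budget L, repeated splitting halves the budget
# along a binary tree, so the spray phase lasts about log2(L) encounters
# before every holder is waiting with a single copy.
budget = 8
while budget > 1:
    budget, handed = spray(budget)
print(budget)  # 1
```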

A Comparison of Machine Learning Techniques for Facial Expression Recognition

Deaney, Mogammat Waleed January 2018 (has links)
Magister Scientiae - MSc (Computer Science) / A machine translation system that can convert South African Sign Language (SASL) video to audio or text and vice versa would be beneficial to people who use SASL to communicate. Five fundamental parameters are associated with sign language gestures: hand location, hand orientation, hand shape, hand movement and facial expressions. The aim of this research is to recognise facial expressions and to compare both feature descriptors and machine learning techniques. This research used the Design Science Research (DSR) methodology. A DSR artefact was built which consisted of two phases. The first phase compared local binary patterns (LBP), compound local binary patterns (CLBP) and histograms of oriented gradients (HOG) using support vector machines (SVM). The second phase compared the SVM to artificial neural networks (ANN) and random forests (RF) using the most promising feature descriptor from the first phase, HOG. Performance was evaluated in terms of accuracy, robustness to classes, robustness to subjects and ability to generalise on both the Binghamton University 3D facial expression (BU-3DFE) and Cohn-Kanade (CK) datasets. The first evaluation phase showed HOG to be the best feature descriptor, followed by CLBP and LBP. The second phase showed ANN to be the best choice of machine learning technique, closely followed by the SVM and RF.
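The basic LBP operator compared in the first phase can be sketched as follows (a toy illustration of the standard 3x3 operator, not the study's implementation; neighbour ordering conventions vary between implementations):

```python
def lbp_code(patch):
    """Basic 3x3 local binary pattern: threshold the 8 neighbours of the
    centre pixel against the centre value and read the results as an
    8-bit code (here clockwise from the top-left corner)."""
    c = patch[1][1]
    nbrs = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
            patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    return sum(1 << i for i, p in enumerate(nbrs) if p >= c)

patch = [[6, 5, 2],
         [7, 6, 1],
         [9, 8, 7]]
print(lbp_code(patch))  # 241
```

Collecting these codes into a 256-bin histogram over an image region gives the texture/face descriptor that the SVM, ANN and RF classifiers are trained on.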

Semantic monitoring mechanisms dedicated to security monitoring in IaaS cloud / Mécanismes de monitoring sémantique dédiés à la sécurité des infrastructures cloud IaaS

Hebbal, Yacine 18 September 2017 (has links)
Virtual Machine Introspection (VMI) consists in monitoring the states and activities of VMs from the hypervisor layer, which, thanks to its location, offers strong visibility into their activities in addition to strong isolation from them. However, the hypervisor's view of VMs is just raw bits and bytes, in addition to hardware states. The semantic difference between this raw view and the one needed for VM security monitoring presents a significant challenge for VMI called "the semantic gap". In order to obtain semantic information about VM states and activities for monitoring their security from the hypervisor layer, we present in this thesis a set of techniques based on the analysis and reuse of VM kernel binary code. These techniques make it possible to identify the addresses and names of most VM kernel functions and then to instrument (call, intercept and analyze) them to automatically bridge the semantic gap, regardless of the challenges presented by compiler optimizations and kernel base address randomization.

Towards optimal local binary patterns in texture and face description

Ylioinas, J. (Juha) 15 November 2016 (has links)
Abstract: Local binary patterns (LBP) are among the most popular image description methods and have been successfully applied to a diverse set of computer vision problems, covering texture classification, material categorization, face recognition, and image segmentation, to name only a few. The popularity of the LBP methodology can be verified by inspecting the vast number of existing studies on its variations and extensions; the methodology is now acknowledged as one of the milestones in face recognition research. The starting point of this research is to gain more understanding of the principles on which the original LBP descriptor is based. With that insight, another attempt is made to improve the steps of the LBP pipeline, consisting of image pre-processing, pattern sampling, pattern encoding, binning, and further histogram post-processing. The main contribution of this thesis is a set of novel LBP extensions that partly aim to unify some of the existing derivatives and extensions. The basis for the design of the new LBP methodology is to maximize data-driven premises while minimizing the need for tuning by hand. Prior to local binary pattern extraction, the thesis presents an image upsampling step dubbed image pre-interpolation. As a natural consequence of upsampling, a greater number of patterns can be extracted and binned to a histogram, improving the representational performance of the final descriptor. To improve the following two steps of the LBP pipeline, namely pattern sampling and encoding, three different learning-based methods are introduced. Finally, a unifying model is presented for the last step of the LBP pipeline, namely local binary pattern histogram post-processing. As a special case of this, a novel histogram smoothing scheme is proposed, which shares its motivation and effects with image pre-interpolation for the most part.
Deriving descriptors for face recognition problems such as face verification or age estimation has been, and continues to be, among the most popular domains where LBP is applied. This study is no exception in that regard, as the main investigations and conclusions here are made on the basis of how the proposed LBP variations perform, especially in face recognition problems. The experimental part of the study demonstrates that the proposed methods, validated using publicly available texture and face datasets, yield results comparable to the best-performing LBP variants found in the literature, as reported with the corresponding benchmarks.
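The effect of pre-interpolation on the pattern count can be illustrated with a toy sketch (nearest-neighbour upsampling stands in for the thesis's interpolation scheme; `lbp_histogram` and `upsample2x` are hypothetical names):

```python
def lbp_histogram(img):
    """Histogram of basic 3x3 LBP codes over a grayscale image given as
    a list of rows (a sketch; border pixels are skipped)."""
    hist = [0] * 256
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            c = img[y][x]
            nbrs = [img[y-1][x-1], img[y-1][x], img[y-1][x+1], img[y][x+1],
                    img[y+1][x+1], img[y+1][x], img[y+1][x-1], img[y][x-1]]
            code = sum(1 << i for i, p in enumerate(nbrs) if p >= c)
            hist[code] += 1
    return hist

def upsample2x(img):
    """Nearest-neighbour 2x upsampling, standing in for the thesis's
    pre-interpolation step (the actual interpolation scheme differs)."""
    out = []
    for row in img:
        wide = [p for p in row for _ in (0, 1)]
        out.append(wide)
        out.append(list(wide))
    return out

# An HxW image yields (H-2)*(W-2) patterns; after 2x upsampling the same
# content yields (2H-2)*(2W-2), filling the histogram more densely.
img = [[(3 * x + 5 * y) % 7 for x in range(8)] for y in range(8)]
print(sum(lbp_histogram(img)), sum(lbp_histogram(upsample2x(img))))  # 36 196
```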

Correlation attacks on stream ciphers using convolutional codes

Bruwer, Christian S 24 January 2006 (has links)
This dissertation investigates four methods for attacking stream ciphers that are based on nonlinear combining generators: two exhaustive-search correlation attacks, based on the binary derivative and the Lempel-Ziv complexity measure; a fast-correlation attack utilizing the Viterbi algorithm; and a decimation attack that can be combined with any of the above three attacks. These are ciphertext-only attacks that exploit the correlation that occurs between the ciphertext and an internal linear feedback shift register (LFSR) of a stream cipher. This leads to a so-called divide-and-conquer attack that is able to reconstruct the secret initial states of all the internal LFSRs within the stream cipher. The binary derivative attack and the Lempel-Ziv attack apply an exhaustive search to find the secret key that is used to initialize the LFSRs. The binary derivative and the Lempel-Ziv complexity measures are used to discriminate between correct and incorrect solutions in order to identify the secret key. Both attacks are ideal for implementation on parallel processors. Experimental results show that the Lempel-Ziv correlation attack gives successful results for correlation levels of p = 0.482, requiring approximately 62000 ciphertext bits, and the binary derivative attack is successful for correlation levels of p = 0.47, using approximately 24500 ciphertext bits. The fast-correlation attack, utilizing the Viterbi algorithm, applies principles from convolutional coding theory to identify an embedded low-rate convolutional code in the pn-sequence generated by an internal LFSR. The embedded convolutional code can then be decoded with a low-complexity Viterbi algorithm. The algorithm operates in two phases: in the first phase a set of suitable parity check equations is found, based on the feedback taps of the LFSR, which has to be done only once for a targeted system.
In the second phase these parity check equations are utilized in a Viterbi decoding algorithm to recover the transmitted pn-sequence, thereby obtaining the secret initial state of the LFSR. Simulation results for a 19-bit LFSR show that this attack can recover the secret key for correlation levels of p = 0.485, requiring an average of only 153,448 ciphertext bits. All three attacks investigated in this dissertation are capable of attacking LFSRs with a length of approximately 40 bits. However, these attacks can be extended to attack much longer LFSRs by making use of a decimation attack. The decimation attack is able to reduce (decimate) the size of a targeted LFSR, and can be combined with any of the three above correlation attacks, to attack LFSRs with a length much longer than 40 bits. / Dissertation (MEng (Electronic Engineering))--University of Pretoria, 2007. / Electrical, Electronic and Computer Engineering / unrestricted
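As a sketch of the first distinguishing statistic mentioned above (the statistic only, not the full key search), the binary derivative of a keystream segment can be computed directly; `binary_derivative` is an illustrative name, not code from the dissertation:

```python
def binary_derivative(bits):
    """First binary derivative of a bit sequence: the XOR of each pair
    of adjacent bits. Iterating this and measuring the bias of the
    result is the basis of the exhaustive-search discrimination between
    candidate keys."""
    return [a ^ b for a, b in zip(bits, bits[1:])]

keystream = [1, 0, 1, 1, 0, 0, 1]
print(binary_derivative(keystream))  # [1, 1, 0, 1, 0, 1]
```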

Power Line For Data Communication : Characterisation And Simulation

Yogesh, S 07 1900 (has links) (PDF)
No description available.

Learning with Complex Performance Measures : Theory, Algorithms and Applications

Narasimhan, Harikrishna January 2016 (has links) (PDF)
We consider supervised learning problems, where one is given objects with labels, and the goal is to learn a model that can make accurate predictions on new objects. These problems abound in applications, ranging from medical diagnosis to information retrieval to computer vision. Examples include binary or multiclass classification, where the goal is to learn a model that can classify objects into two or more categories (e.g. categorizing emails as spam or non-spam); bipartite ranking, where the goal is to learn a model that can rank relevant objects above irrelevant ones (e.g. ranking documents by relevance to a query); and class probability estimation (CPE), where the goal is to predict the probability of an object belonging to different categories (e.g. the probability of an internet ad being clicked by a user). In each case, the accuracy of a model is evaluated in terms of a specified `performance measure'. While there has been much work on designing and analyzing algorithms for different supervised learning tasks, we have a complete understanding only for settings where the performance measure of interest is the standard 0-1 or a loss-based classification measure. These performance measures have a simple additive structure and can be expressed as an expectation of errors on individual examples. However, in many real-world applications, the performance measure used to evaluate a model is often more complex and does not decompose into a sum or expectation of point-wise errors. These include the binary or multiclass G-mean used in class-imbalanced classification problems; the F1-measure and its multiclass variants popular in text retrieval; and the (partial) area under the ROC curve (AUC) and precision@ employed in ranking applications. How does one design efficient learning algorithms for such complex performance measures, and can these algorithms be shown to be statistically consistent, i.e.
shown to converge, in the limit of infinite data, to the optimal model for the given measure? How does one develop efficient learning algorithms for complex measures in online/streaming settings where the training examples need to be processed one at a time? These are the questions that we seek to address in this thesis. Firstly, we consider the bipartite ranking problem with the AUC and partial AUC performance measures. We start by understanding how bipartite ranking with AUC is related to the standard 0-1 binary classification and CPE tasks. It is known that a good binary CPE model can be used to obtain both a good binary classification model and a good bipartite ranking model (formally, in terms of regret transfer bounds), and that a binary classification model does not necessarily yield a CPE model. However, not much is known about the other directions. We show that in a weaker sense (where the mapping needed to transform a model from one problem to another depends on the underlying probability distribution), a good bipartite ranking model for AUC can indeed be used to construct a good binary classification model, and also a good binary CPE model. Next, motivated by the increasing number of applications (e.g. biometrics, medical diagnosis, etc.) where performance is measured not in terms of the full AUC but in terms of the partial AUC between two false positive rates (FPRs), we design batch algorithms for optimizing the partial AUC in any given FPR range. Our algorithms optimize structural support vector machine based surrogates, which, unlike for the full AUC, do not admit a straightforward decomposition into simpler terms. We develop polynomial-time cutting plane solvers for the optimization, and provide experiments to demonstrate the efficacy of our methods. We also present an application of our approach to predicting chemotherapy outcomes for cancer patients, with the aim of improving treatment decisions.
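As an illustration of the quantity being optimized (not of the thesis's structural SVM solver), the empirical partial AUC over an FPR band can be estimated from its pairwise characterization; `partial_auc` is a hypothetical helper and the boundary handling is deliberately simplified:

```python
def partial_auc(scores, labels, alpha, beta):
    """Empirical partial AUC in the FPR range [alpha, beta], normalized
    to [0, 1]: the fraction of correctly ordered positive-negative
    pairs, restricted to the negatives whose rank positions induce an
    FPR inside the band (a simplified sketch that ignores ties across
    the band boundaries)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    negs = sorted((s for s, y in zip(scores, labels) if y == 0), reverse=True)
    lo, hi = int(alpha * len(negs)), int(beta * len(negs))
    band = negs[lo:hi]  # negatives occupying the target FPR band
    if not pos or not band:
        return 0.0
    correct = sum(1.0 if p > n else 0.5 if p == n else 0.0
                  for p in pos for n in band)
    return correct / (len(pos) * len(band))

# A scorer that ranks both positives above both negatives attains
# partial AUC 1.0 on any FPR band.
print(partial_auc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0], 0.0, 0.5))  # 1.0
```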
Secondly, we develop algorithms for optimizing (surrogates for) complex performance measures in the presence of streaming data. A well-known method for solving this problem for standard point-wise surrogates such as the hinge surrogate is the stochastic gradient descent (SGD) method, which performs point-wise updates using unbiased gradient estimates. However, this method cannot be applied to complex objectives, as here one can no longer obtain unbiased gradient estimates from a single point. We develop a general stochastic method for optimizing complex measures that avoids point-wise updates and instead performs gradient updates on mini-batches of incoming points. The method is shown to provably converge for any performance measure that satisfies a uniform convergence requirement, such as the partial AUC, precision@ and F1-measure, and in experiments is often several orders of magnitude faster than the state-of-the-art batch methods, while achieving similar or better accuracies. Moreover, for specific complex binary classification measures, which are concave functions of the true positive rate (TPR) and true negative rate (TNR), we are able to develop stochastic (primal-dual) methods that can indeed be implemented with point-wise updates, by using an adaptive linearization scheme. These methods admit convergence rates that match the rate of the SGD method, and are again several times faster than the state-of-the-art methods. Finally, we look at the design of consistent algorithms for complex binary and multiclass measures. For binary measures, we consider the practically popular plug-in algorithm that constructs a classifier by applying an empirical threshold to a suitable class probability estimate, and provide a general methodology for proving the consistency of these methods. We apply this technique to show consistency for the F1-measure and, under a continuity assumption on the distribution, for any performance measure that is monotonic in the TPR and TNR.
For the case of multiclass measures, a simple plug-in method is no longer tractable, as in place of a single threshold parameter, one needs to tune at least as many parameters as the number of classes. Using an optimization viewpoint, we provide a framework for designing learning algorithms for multiclass measures that are general functions of the confusion matrix, and as an instantiation, provide an efficient and provably consistent algorithm based on the bisection method for multiclass measures that are ratio-of-linear functions of the confusion matrix (e.g. micro F1). The algorithm outperforms the state-of-the-art SVMPerf method in terms of both accuracy and running time. Overall, in this thesis, we have looked at various aspects of complex performance measures used in supervised learning problems, leading to several new algorithms that are often significantly better than the state of the art, to improved theoretical understanding of the performance measures studied, and to novel real-life applications of the algorithms developed.
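A minimal sketch of the binary plug-in rule discussed above, assuming class-probability estimates are already available (`plugin_f1_threshold` is an illustrative name, not the thesis's algorithm or its consistency proof):

```python
def plugin_f1_threshold(probs, labels):
    """Plug-in rule sketch: sweep a threshold on class-probability
    estimates and keep the one maximizing the empirical F1-measure
    on the tuning sample."""
    best_t, best_f1 = 0.5, -1.0
    for t in sorted(set(probs)):
        tp = sum(1 for p, y in zip(probs, labels) if p >= t and y == 1)
        fp = sum(1 for p, y in zip(probs, labels) if p >= t and y == 0)
        fn = sum(1 for p, y in zip(probs, labels) if p < t and y == 1)
        f1 = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t, best_f1

probs = [0.9, 0.8, 0.6, 0.4, 0.2]
labels = [1, 1, 0, 1, 0]
print(plugin_f1_threshold(probs, labels))
```

The multiclass case replaces this single swept threshold with one tuning parameter per class, which is what makes the bisection-based framework necessary there.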
