21

Caractérisation analytique et optimisation de codes source-canal conjoints / Analytical Characterization and Optimization of Joint Source-Channel Codes

Diallo, Amadou Tidiane 01 October 2012 (has links)
Les codes source-canal conjoints sont des codes réalisant simultanément une compression de données et une protection du train binaire généré par rapport à d’éventuelles erreurs de transmission. Ces codes sont non-linéaires, comme la plupart des codes de source. Leur intérêt potentiel est d’offrir de bonnes performances en termes de compression et de correction d’erreur pour des longueurs de codes réduites. La performance d’un code de source se mesure par la différence entre l’entropie de la source à compresser et le nombre moyen de bits nécessaire pour coder un symbole de cette source. La performance d’un code de canal se mesure par la distance minimale entre mots de codes ou entre suites de mots de codes, et plus généralement à l’aide du spectre des distances. Les codes classiques disposent d’outils pour évaluer efficacement ces critères de performance. Par ailleurs, la synthèse de bons codes de source ou de bons codes de canal est un domaine largement exploré depuis les travaux de Shannon. Par contre, des outils analogues pour des codes source-canal conjoints, tant pour l’évaluation de performance que pour la synthèse de bons codes, restaient à développer, même si certaines propositions ont déjà été faites dans le passé. Cette thèse s’intéresse à la famille des codes source-canal conjoints pouvant être décrits par des automates possédant un nombre fini d’états. Les codes quasi-arithmétiques correcteurs d’erreurs et les codes à longueurs variables correcteurs d’erreurs font partie de cette famille. La manière dont un automate peut être obtenu pour un code donné est rappelée. À partir d’un automate, il est possible de construire un graphe produit permettant de décrire toutes les paires de chemins divergeant d’un même état et convergeant vers un autre état.
Nous avons montré que, grâce à l’algorithme de Dijkstra, il est alors possible d’évaluer la distance libre d’un code conjoint avec une complexité polynomiale. Pour les codes à longueurs variables correcteurs d’erreurs, nous avons proposé des bornes supplémentaires, faciles à évaluer. Ces bornes constituent des extensions des bornes de Plotkin et de Heller aux codes à longueurs variables. Des bornes peuvent également être déduites du graphe produit associé à un code dont seule une partie des mots de codes a été spécifiée. Ces outils pour borner ou évaluer exactement la distance libre d’un code conjoint permettent de réaliser la synthèse de codes ayant de bonnes propriétés de distance pour une redondance donnée, ou minimisant la redondance pour une distance libre donnée. Notre approche consiste à organiser la recherche de bons codes source-canal conjoints à l’aide d’arbres. La racine de l’arbre correspond à un code dont aucun bit n’est spécifié, les feuilles à des codes dont tous les bits sont spécifiés, et les nœuds intermédiaires à des codes partiellement spécifiés. Lors d’un déplacement de la racine vers les feuilles de l’arbre, les bornes supérieures sur la distance libre décroissent, tandis que les bornes inférieures croissent. Ceci permet d’appliquer un algorithme de type branch-and-prune pour trouver le code avec la plus grande distance libre, sans avoir à explorer tout l’arbre contenant les codes. L’approche proposée a permis la construction de codes conjoints pour les lettres de l’alphabet.
Comparé à un schéma tandem équivalent (code de source suivi d’un code convolutif), les codes obtenus ont des performances comparables (taux de codage, distance libre) tout en étant moins complexes en termes de nombre d’états du décodeur. Plusieurs extensions de ces travaux sont en cours : 1) synthèse de codes à longueurs variables correcteurs d’erreurs formalisée comme un problème de programmation linéaire mixte en nombres entiers ; 2) exploration à l’aide d’un algorithme de type A* de l’espace des codes à longueurs variables correcteurs d’erreurs. / Joint source-channel codes are codes simultaneously providing data compression and protection of the generated bitstream against transmission errors. These codes are non-linear, as are most source codes. Their potential interest is to offer good performance in terms of compression and error correction for short code lengths. The performance of a source code is measured by the difference between the entropy of the source to be compressed and the average number of bits needed to encode a symbol of this source. The performance of a channel code is measured by the minimum distance between codewords or sequences of codewords, and more generally with the distance spectrum. Classical codes come with tools to evaluate these performance criteria efficiently. Furthermore, the design of good source codes or good channel codes has been widely explored since the work of Shannon. However, similar tools for joint source-channel codes, both for performance evaluation and for the design of good codes, remained to be developed, although some proposals have been made in the past. This thesis focuses on the family of joint source-channel codes that can be described by automata with a finite number of states. Error-correcting quasi-arithmetic codes and error-correcting variable-length codes are part of this family.
The way to construct an automaton for a given code is recalled. From an automaton, it is possible to construct a product graph describing all pairs of paths diverging from the same state and converging to another state. We have shown that, using Dijkstra's algorithm, the free distance of a joint code can then be evaluated with polynomial complexity. For error-correcting variable-length codes, we proposed additional, easy-to-evaluate bounds. These bounds are extensions of the Plotkin and Heller bounds to variable-length codes. Bounds can also be deduced from the product graph associated with a code in which only part of the codewords has been specified. These tools for bounding or exactly evaluating the free distance of a joint code make it possible to design codes with good distance properties for a given redundancy, or codes minimizing the redundancy for a given free distance. Our approach organizes the search for good joint source-channel codes with trees. The root of the tree corresponds to a code in which no bit is specified, the leaves to codes in which all bits are specified, and the intermediate nodes to partially specified codes. When moving from the root to the leaves of the tree, the upper bounds on the free distance decrease, while the lower bounds grow. This allows a branch-and-prune algorithm to find the code with the largest free distance without having to explore the whole tree of codes. The proposed approach has allowed the construction of joint codes for the letters of the alphabet. Compared to an equivalent tandem scheme (a source code followed by a convolutional code), the codes obtained have comparable performance (coding rate, free distance) while being less complex in terms of the number of states of the decoder.
Several extensions of this work are in progress: 1) design of error-correcting variable-length codes formalized as a mixed-integer linear programming problem; 2) exploration of the space of error-correcting variable-length codes using an A*-type algorithm.
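The free-distance evaluation described in this abstract reduces to a shortest-path computation on the product graph: edge weights are the Hamming distances between the bits emitted along each pair of paths, and the free distance is the minimum weight of a path from a divergence state to a convergence state. The sketch below illustrates that reduction with Dijkstra's algorithm on a tiny hypothetical product graph; the graph, state names, and weights are invented for illustration and are not taken from the thesis.

```python
# Sketch: evaluating the free distance of a joint code as a shortest-path
# problem on a product graph. The toy graph is illustrative only; real
# product graphs are derived from the code's finite-state automaton.
import heapq

def dijkstra(graph, source, targets):
    """Shortest-path weights from source; graph maps node -> [(neighbor, weight)]."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return {t: dist.get(t, float("inf")) for t in targets}

# Hypothetical product graph: nodes stand for pairs of automaton states,
# edge weights for Hamming distances between the bits emitted on each path.
product_graph = {
    "div": [("A", 1), ("B", 2)],   # divergence: the two paths split here
    "A":   [("B", 1), ("conv", 2)],
    "B":   [("conv", 1)],          # convergence: the paths merge again
}
free_distance = min(dijkstra(product_graph, "div", ["conv"]).values())
print(free_distance)  # → 3
```

Because Dijkstra's algorithm runs in polynomial time in the size of the product graph, this gives the polynomial-complexity free-distance evaluation mentioned above.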
22

Modelagem estocástica de sequências de disparos de um conjunto de neurônios / Stochastic modeling of spike trains of a set of neurons

Arias Rodriguez, Azrielex Andres 13 August 2013 (has links)
O presente trabalho constitui um primeiro esforço por modelar disparos de neurônios usando cadeias estocásticas de memória de alcance variável. Esses modelos foram introduzidos por Rissanen (1983). A ideia principal deste tipo de modelos consiste em que a definição probabilística de cada símbolo depende somente de uma porção finita do passado e o comprimento dela é função do passado mesmo; tal porção foi chamada de "contexto" e o conjunto de contextos pode ser representado através de uma árvore. No passado vários métodos de estimação foram propostos, nos quais é necessário especificar algumas constantes, de forma que Galves et al. (2012) apresentaram o "critério do menor maximizador" (SMC), sendo este um algoritmo consistente que independe de qualquer constante. De outro lado, na área da neurociência vem tomando força a ideia de que o processamento de informação do cérebro é feito de forma probabilística. Por esta razão foram usados os dados coletados por Sidarta Ribeiro e sua equipe, correspondentes à atividade neuronal em ratos, para estimar as árvores de contextos que caracterizam os disparos de quatro neurônios do hipocampo e identificar possíveis associações entre eles; também foram feitas comparações de acordo com o estado comportamental do rato (Vigília / Sono). Em todos os casos foi usado o algoritmo SMC para a estimação das árvores de contexto. Por último, é aberta uma discussão sobre o tamanho de amostra necessário para a implementação deste tipo de análise. / This work describes an initial effort to model neuronal spike trains using Variable Length Markov Chains (VLMC). These models were introduced by Rissanen (1983). The main idea of this kind of model is that the probabilistic definition of each symbol depends only on a finite portion of the past, and the length of this relevant portion is itself a function of the past. This portion is called a "context", and the set of contexts can be represented as a rooted labeled tree.
In the past, several estimation methods were proposed in which it is necessary to specify some constants; for this reason, Galves et al. (2012) introduced the "smallest maximizer criterion" (SMC), a consistent and constant-free model selection procedure. On the other hand, in neuroscience the idea that information processing in the brain is done in a probabilistic way has gained strength. For this reason, the data collected by Sidarta Ribeiro and his team on neuronal activity in rats were used to estimate the context trees describing the spike trains of four neurons of the hippocampal region and to identify associations between them; comparisons were also made according to the behavioural state of the rat (Wake / Sleep). In all cases, the SMC algorithm was used to estimate the context trees. Finally, a discussion is opened on the sample size required for this kind of analysis.
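Context-tree estimation of the kind described above starts from empirical counts: for each candidate context (a finite suffix of the past), one tabulates how often each next symbol follows it. The sketch below shows that counting step on a toy binary spike sequence; it is only the raw ingredient of procedures such as the SMC, not the selection criterion itself, and the spike data are invented for illustration.

```python
# Sketch: next-symbol counts for candidate contexts in a binary spike train.
# This is the empirical input to context-tree selection procedures (the SMC
# itself involves a further maximization step not shown here).
from collections import defaultdict

def context_counts(seq, max_depth):
    """Count, for every context of length 1..max_depth, each next symbol."""
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(max_depth, len(seq)):
        for d in range(1, max_depth + 1):
            ctx = tuple(seq[i - d:i])       # the d most recent symbols
            counts[ctx][seq[i]] += 1
    return counts

# Toy spike train (1 = spike, 0 = silence); not real recordings.
spikes = [0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
counts = context_counts(spikes, max_depth=2)

# Empirical next-symbol distribution after observing the context (0, 1):
ctx = (0, 1)
total = sum(counts[ctx].values())
probs = {s: n / total for s, n in counts[ctx].items()}
print(probs)  # → {0: 0.8, 1: 0.2}
```

A context tree then keeps a context only as long as its next-symbol distribution genuinely differs from that of its shorter suffixes, which is what pruning criteria such as the SMC decide.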
24

Comportement asymptotique des systèmes de fonctions itérées et applications aux chaines de Markov d'ordre variable / Asymptotic behaviour of iterated function systems and applications to variable length Markov chains

Dubarry, Blandine 14 June 2017 (has links)
L'objet de cette thèse est l'étude du comportement asymptotique des systèmes de fonctions itérées (IFS). Dans un premier chapitre, nous présenterons les notions liées à l'étude de tels systèmes et nous rappellerons différentes applications possibles des IFS telles que les marches aléatoires sur des graphes ou des pavages apériodiques, les systèmes dynamiques aléatoires, la classification de protéines ou encore les mesures quantiques répétées. Nous nous attarderons sur deux autres applications : les chaînes de Markov d'ordre infini et d'ordre variable. Nous donnerons aussi les principaux résultats de la littérature concernant l'étude des mesures invariantes pour des IFS ainsi que ceux pour le calcul de la dimension de Hausdorff. Le deuxième chapitre sera consacré à l'étude d'une classe d'IFS composés de contractions sur des intervalles réels fermés dont les images se chevauchent au plus en un point et telles que les probabilités de transition sont constantes par morceaux. Nous donnerons un critère pour l'existence et pour l'unicité d'une mesure invariante pour l'IFS ainsi que pour la stabilité asymptotique en termes de bornes sur les probabilités de transition. De plus, quand il existe une unique mesure invariante et sous quelques hypothèses techniques supplémentaires, on peut montrer que la mesure invariante admet une dimension de Hausdorff exacte qui est égale au rapport de l'entropie sur l'exposant de Lyapunov. Ce résultat étend la formule, établie dans la littérature pour des probabilités de transition continues, au cas considéré ici des probabilités de transition constantes par morceaux. Le dernier chapitre de cette thèse est, quant à lui, consacré à un cas particulier d'IFS : les chaînes de Markov de longueur variable (VLMC). On démontrera que sous une condition de non-nullité faible et de continuité pour la distance ultramétrique des probabilités de transitions, elles admettent une unique mesure invariante qui est attractive pour la convergence faible. 
/ The purpose of this thesis is the study of the asymptotic behaviour of iterated function systems (IFS). In a first chapter, we will introduce the notions related to the study of such systems and recall various applications of IFS, such as random walks on graphs or aperiodic tilings, random dynamical systems, protein classification, and repeated quantum measurements. We will focus on two other applications: chains of infinite order and variable length Markov chains. We will also give the main results in the literature concerning invariant measures for IFS and the computation of the Hausdorff dimension. The second chapter is dedicated to the study of a class of IFS composed of contractions on closed real intervals with non-overlapping or just-touching images and piecewise constant transition probabilities. We give criteria for the existence and the uniqueness of an invariant probability measure for the IFS and for the asymptotic stability of the system in terms of bounds on the transition probabilities. Additionally, in case there exists a unique invariant measure and under some technical assumptions, we obtain its exact Hausdorff dimension as the ratio of the entropy over the Lyapunov exponent. This result extends the formula, established in the literature for continuous transition probabilities, to the case considered here of piecewise constant probabilities. The last chapter is dedicated to a special case of IFS: Variable Length Markov Chains (VLMC). We show that under a weak non-nullness condition and continuity of the transition probabilities for the ultrametric distance, they admit a unique invariant measure which is attractive for the weak convergence.
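The entropy-over-Lyapunov-exponent formula mentioned in this abstract can be illustrated numerically on the simplest textbook IFS: the two affine contractions generating the middle-thirds Cantor set, chosen with equal probabilities. The sketch below runs the random iteration and estimates entropy and Lyapunov exponent along the orbit; the specific maps and probabilities are a standard example, not taken from the thesis, and with constant derivatives and equal weights the estimate is exact.

```python
# Sketch: dimension of an IFS invariant measure as entropy / |Lyapunov exponent|,
# illustrated on the middle-thirds Cantor IFS (a classical example).
import math
import random

random.seed(0)
maps = [lambda x: x / 3.0, lambda x: x / 3.0 + 2.0 / 3.0]  # two contractions
derivs = [1.0 / 3.0, 1.0 / 3.0]                            # |f_i'| (constant)
probs = [0.5, 0.5]                                         # transition probabilities

n = 100_000
x = 0.5
log_deriv_sum = 0.0
entropy_sum = 0.0
for _ in range(n):
    i = 0 if random.random() < probs[0] else 1
    entropy_sum += -math.log(probs[i])        # ergodic average -> entropy
    log_deriv_sum += math.log(derivs[i])      # ergodic average -> Lyapunov exponent
    x = maps[i](x)

entropy = entropy_sum / n          # ln 2 here (equal weights)
lyapunov = log_deriv_sum / n       # ln(1/3) here (constant contraction ratio)
dim = entropy / abs(lyapunov)      # = ln 2 / ln 3, the Cantor set dimension
print(round(dim, 4))  # → 0.6309
```

The thesis's contribution is establishing this ratio formula when the transition probabilities are only piecewise constant rather than continuous; the example above merely shows the quantities involved.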
25

The design of an electro-optic control interface for photonic packet switching applications with contention resolution capabilities

Van der Merwe, Jacobus Stefanus 05 November 2007 (has links)
The objective of the research is to design an electro-optic control for the Active Vertical Coupler-based Optical Cross-point Switch (OXS). The electronic control should be implemented on a Printed Circuit Board (PCB), and therefore the design includes the PCB design as well. The aim of the electronic control board is to process the headers of the packets before they enter the OXS, determine from the header information the state the OXS should be configured in, and then configure the optical cross-point accordingly. The electronic control board should be flexible in the sense that it can handle different types of traffic and resolve possible contention. The research seeks to understand the problems associated with Photonic Packet Switching (PPS) networks. Two of the main problems identified in a PPS network are contention resolution and the lack of variable delays for storing optical packets. The OXS was analyzed and found to meet the requirements of future ultra-high-speed PPS network technology with its high extinction ratio, wide optical bandwidth, ultra-fast switching speed and low crosstalk levels. Photonic packets were generated with 4-bit, 8-bit or 16-bit headers at a bit rate of 155 Mbit/s, followed by a PRBS (Pseudo Random Bit Sequence) payload at 10 Gbit/s. Different scenarios were created with these types of packets, and the electro-optic control and OXS were subjected to them with the aim of testing the flexibility of the electro-optic control. These scenarios include:
• Fixed-length packets arriving synchronously at one input of the OXS. Some packets are destined for output 1, some for output 2 and some for output 3, therefore realizing a 1-to-3 optical switch.
• Eight variable-length packets arriving synchronously at one input of the OXS, all of them destined for one output. The electro-optic control should open the switch cell for the correct amount of time.
• Three variable-length packets arriving synchronously and asynchronously at one input of the OXS. Some packets are destined for output 1 while other packets are destined for output 2. The electro-optic control should open the correct switch cell for the correct amount of time.
• Two fixed-length packets arriving at the OXS synchronously on different input ports at the same time, both destined for the same output port. The electro-optic control should detect the contention and switch the packets in such a way as to resolve it.
The electro-optic control and OXS managed to switch all these types of data traffic successfully and resolved the contention with an optical delay buffer. The success of the results was measured in two ways. Firstly, a test was deemed successful if the expected output sequence was measured at the corresponding output ports. Secondly, it was successful if the degradation in packet quality was not drastic, meaning the output packets should have a BER (Bit Error Rate) of less than 10⁻⁹. The quality of the packets was measured in the form of eye diagrams before and after switching, and then compared. The research resulted in the design and implementation of a flexible electro-optic control for the OXS. The problem of contention was resolved for fixed-length synchronous packets, and a proposal is discussed to store packets for variable lengths of time by using the OXS. This electro-optic control has the potential to control the OXS for traffic with higher complexities and make the OXS compatible with future developments. / Dissertation (MEng (Electronic Engineering))--University of Pretoria, 2008. / Electrical, Electronic and Computer Engineering / MEng / unrestricted
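The contention-resolution logic in the last scenario can be sketched as a simple arbitration rule: when two packets in the same time slot request the same output, one is switched immediately and the other is routed through a delay buffer for a later slot. The simulation below is a hypothetical software model of that rule only; the actual control is implemented in hardware on the PCB, and the function and port names here are invented.

```python
# Sketch: first-come arbitration with a delay buffer, modeling the contention
# resolution described above. Purely illustrative; the real control is hardware.
def schedule(packets):
    """packets: list of (input_port, output_port) arriving in the same slot.
    Returns (switched, delayed): the first packet wins each output; later
    packets contending for a taken output are delayed one slot."""
    switched, delayed = [], []
    taken = set()
    for pkt in packets:
        _inp, out = pkt
        if out in taken:
            delayed.append(pkt)   # contention: route through the delay buffer
        else:
            taken.add(out)
            switched.append(pkt)  # output free: configure the cross-point
    return switched, delayed

# Two packets contend for out1; the delay buffer absorbs the loser.
slot = [(1, "out1"), (2, "out1"), (3, "out2")]
now, later = schedule(slot)
print(now)    # → [(1, 'out1'), (3, 'out2')]
print(later)  # → [(2, 'out1')]
```

Delayed packets would re-enter arbitration in the next slot, which mirrors the fixed-length optical delay line used in the dissertation's experiments.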
26

Anomaly Detection and Security Deep Learning Methods Under Adversarial Situation

Miguel Villarreal-Vasquez (9034049) 27 June 2020 (has links)
Advances in Artificial Intelligence (AI), or more precisely in Neural Networks (NNs), and fast processing technologies (e.g. Graphics Processing Units, or GPUs) in recent years have positioned NNs as one of the main machine learning algorithms used to solve a diversity of problems in both academia and industry. While they have proved effective in solving many tasks, the lack of security guarantees and of understanding of their internal processing hinders their wide adoption in general and in cybersecurity-related applications. In this dissertation, we present the findings of a comprehensive study aimed at enabling the adoption of state-of-the-art NN algorithms in the development of enterprise solutions. Specifically, this dissertation focuses on (1) the development of defensive mechanisms to protect NNs against adversarial attacks and (2) the application of NN models for anomaly detection in enterprise networks.
In this state of affairs, this work makes the following contributions. First, we performed a thorough study of the different adversarial attacks against NNs. We concentrate on the attacks referred to as trojan attacks and introduce a novel model hardening method that removes any trojan (i.e. misbehavior) inserted into NN models at training time. We carefully evaluate our method and establish the correct metrics to test the efficiency of defensive methods against these types of attacks: (1) accuracy on benign data, (2) attack success rate, and (3) accuracy on adversarial data. Prior work evaluates solutions using the first two metrics only, which do not suffice to guarantee robustness against untargeted attacks. Our method is compared with the state of the art, and the obtained results show that it outperforms it. Second, we propose a novel approach to detect anomalies using LSTM-based models. Our method analyzes at runtime the event sequences generated by the Endpoint Detection and Response (EDR) system of a renowned security company and efficiently detects uncommon patterns. The new detection method is compared with the EDR system; the results show that our method achieves a higher detection rate. Finally, we present a Moving Target Defense technique that reacts upon the detection of anomalies so as to also mitigate the detected attacks. The technique efficiently replaces the entire stack of virtual nodes, making ongoing attacks on the system ineffective.
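The sequence-anomaly idea in this abstract — score an event sequence by how unlikely its transitions are under a model trained on normal behavior — can be shown with a much simpler stand-in than an LSTM. The sketch below uses a bigram model and average negative log-likelihood as the anomaly score; this is not the dissertation's method, only an illustration of the scoring principle, and the event names and data are invented.

```python
# Sketch: scoring event sequences by likelihood under a model of normal
# behavior. A bigram model stands in for the LSTM used in the dissertation.
from collections import defaultdict
import math

def train_bigram(sequences):
    """Estimate P(next event | current event) from normal sequences."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(nxt.values()) for b, n in nxt.items()}
            for a, nxt in counts.items()}

def anomaly_score(model, seq, floor=1e-6):
    """Average negative log-likelihood; higher = more uncommon pattern."""
    nll = 0.0
    for a, b in zip(seq, seq[1:]):
        nll -= math.log(model.get(a, {}).get(b, floor))  # floor: unseen transition
    return nll / max(len(seq) - 1, 1)

# Hypothetical "normal" event sequences (invented for illustration).
normal = [["open", "read", "close"]] * 50 + [["open", "write", "close"]] * 50
model = train_bigram(normal)
print(anomaly_score(model, ["open", "read", "close"]) <
      anomaly_score(model, ["open", "exec", "read"]))  # → True
```

An LSTM replaces the bigram table with a learned conditional distribution over long histories, but the detection decision — thresholding a sequence's likelihood — has the same shape.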
27

自變數有測量誤差的羅吉斯迴歸模型之序貫設計探討及其在教育測驗上的應用 / Sequential Designs with Measurement Errors in Logistic Models with Applications to Educational Testing

盧宏益, Lu, Hung-Yi Unknown Date (has links)
本論文探討當自變數存在測量誤差時,羅吉斯迴歸模型的估計問題,並將此結果應用在電腦化適性測驗中的線上校準問題。在變動長度電腦化測驗的假設下,我們證明了估計量的強收斂性。試題反應理論被廣泛地使用在電腦化適性測驗上,其假設受試者在試題的表現情形與本身的能力,可以透過試題特徵曲線加以詮釋,羅吉斯迴歸模式是最常見的試題反應模式。藉由適性測驗的施行,考題的選取可以依據不同受試者,選擇最適合的題目。因此,相較於傳統測驗而言,在適性測驗中,題目的消耗量更為快速。在題庫的維護與管理上,新試題的補充與試題校準便為非常重要的工作。線上試題校準意指在線上測驗進行中,同時進行試題校準。因此,受試者的能力估計會存在測量誤差。從統計的觀點,線上校準面臨的困難,可以解釋為在非線性模型下,當自變數有測量誤差時的實驗設計問題。我們利用序貫設計降低測量誤差,得到更精確的估計,相較於傳統的試題校準,可以節省更多的時間及成本。我們利用處理測量誤差的技巧,進一步應用序貫設計的方法,處理在線上校準中,受試者能力存在測量誤差的問題。 / In this dissertation, we focus on estimation in logistic regression models when the independent variables are subject to measurement errors. The problem is motivated by online calibration in Computerized Adaptive Testing (CAT). We apply measurement error model techniques and adaptive sequential design methodology to the online calibration problem of CAT, and we prove that the estimates of item parameters are strongly consistent under the variable-length CAT setup. In an adaptive testing scheme, examinees are presented with different sets of items chosen from a pre-calibrated item pool. Thus the speed of attrition of items is very fast, and replenishing the item pool is essential for CAT. Online calibration in CAT refers to estimating the parameters of new, uncalibrated items by presenting them to examinees during the course of their ability testing, together with previously calibrated items. Therefore, the estimated latent trait levels of examinees are used as the design points for estimating the parameters of the new items, and naturally these design points are subject to estimation errors. Thus the online calibration problem under the CAT setup can be formulated as a sequential estimation problem with measurement errors in the independent variables, which are also chosen sequentially.
Item Response Theory (IRT) is the most commonly used psychometric model in CAT, and logistic-type models are the most popular models in IRT-based tests; this is why the nonlinear design problem and nonlinear measurement error models are involved. The sequential design procedures proposed here provide more accurate estimates of the parameters and are more efficient in terms of sample size (the number of examinees used in calibration). In the traditional calibration process for paper-and-pencil tests, we usually have to pay the examinees joining the pre-test calibration process. In online calibration there is less cost, since we are able to assign new items to the examinees during the operational test. Therefore, the proposed procedures are cost-effective as well as time-effective.
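The measurement-error structure described above can be made concrete with the two-parameter logistic item characteristic curve, the standard logistic IRT model: the probability of a correct response depends on the examinee's true ability θ, but online calibration only sees a noisy estimate of θ. The sketch below computes the curve and contrasts response probabilities at true versus noisy abilities; the item parameters and noise level are invented for illustration and the 2PL form is the standard model, not a claim about the dissertation's specific design procedure.

```python
# Sketch: the two-parameter logistic (2PL) item characteristic curve, and the
# effect of measurement error in ability estimates used as design points.
import math
import random

def p_correct(theta, a, b):
    """2PL curve: P(correct) = 1 / (1 + exp(-a * (theta - b)))."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

random.seed(1)
a_true, b_true = 1.2, 0.3   # hypothetical discrimination and difficulty
sigma = 0.4                 # hypothetical std. dev. of the ability estimate error

# True abilities vs. the noisy estimates seen during online calibration.
thetas = [random.gauss(0.0, 1.0) for _ in range(5)]
theta_hats = [t + random.gauss(0.0, sigma) for t in thetas]

for t, t_hat in zip(thetas, theta_hats):
    print(round(p_correct(t, a_true, b_true), 3),      # probability at true ability
          round(p_correct(t_hat, a_true, b_true), 3))  # probability at noisy estimate
```

Calibrating (a, b) from responses paired with the noisy θ̂ values instead of θ is exactly the errors-in-variables design problem that the sequential procedures in this dissertation address.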
