  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Indexing Large Permutations in Hardware

Odom, Jacob Henry 07 June 2019
Generating unbiased permutations at run time has traditionally been accomplished through application-specific optimized combinational logic and has been limited to very small permutations. For generating unbiased permutations of any larger size, variations of the memory-dependent Fisher-Yates algorithm are known to be an optimal solution in software and have been relied on as a hardware solution to this day. In hardware, however, this thesis proves Fisher-Yates to be suboptimal. It shows this by proposing an alternate method that does not rely on memory, outperforms Fisher-Yates-based permutation generators, and still scales to very large permutations. The thesis also proves that the proposed method is unbiased and requires minimal input. Lastly, it demonstrates a means to scale the proposed method to permutations of any size and to produce optimal partial permutations. / Master of Science / In computing, some applications need the ability to shuffle or rearrange items based on run-time information during their normal operations. A similar task is a partial shuffle, where only an information-dependent selection of the items is returned in shuffled order. These may seem like trivial tasks. However, the applications that rely on them are typically related to security, which demands repeatable, unbiased operations, and these requirements quickly turn seemingly simple tasks into complex ones. Worse, they are often done incorrectly and only appear to meet the requirements, with disastrous implications for security. A current and dominant method of shuffling items that meets these requirements was developed over fifty years ago and is based on an even older algorithm referred to as Fisher-Yates, after its original authors.
Fisher-Yates-based methods shuffle items in memory, which is an advantage in software but a disadvantage in hardware, since memory access is significantly slower than other operations. Additionally, when performing a partial shuffle, Fisher-Yates methods require the same resources as a complete shuffle, because each element in a Fisher-Yates shuffle depends on all of the other elements. Alternate methods that meet these requirements are known, but they can only shuffle a very small number of items before becoming too slow for practical use. To overcome the disadvantages of current shuffling methods, this thesis proposes an alternate approach to performing shuffles. This approach meets the previously stated requirements while outperforming current methods, and it extends to shuffling any number of items while maintaining a usable level of performance. Further, unlike current popular shuffling methods, the proposed method has no inter-item dependency and thus offers great advantages for partial shuffles.
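For reference, the memory-dependent baseline the thesis argues against can be sketched in a few lines. This is a generic software Fisher-Yates (plus a partial variant), not code from the thesis:

```python
import random

def fisher_yates(items, rng=None):
    """In-place unbiased shuffle: every one of the n! orderings is equally
    likely, provided each index draw is uniform and independent."""
    rng = rng or random.Random()
    a = list(items)
    for i in range(len(a) - 1, 0, -1):
        j = rng.randint(0, i)       # uniform index in [0, i]
        a[i], a[j] = a[j], a[i]     # memory swap: cheap in software, costly in hardware
    return a

def partial_fisher_yates(items, k, rng=None):
    """First k elements of a shuffle. Note the whole array is still held in
    memory -- the partial-shuffle cost the abstract highlights."""
    rng = rng or random.Random()
    a = list(items)
    for i in range(k):
        j = rng.randint(i, len(a) - 1)
        a[i], a[j] = a[j], a[i]
    return a[:k]
```

The partial variant stops after k rounds but still allocates and touches the full array, which illustrates why inter-item dependence is the bottleneck the thesis removes.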
2

Sistemas de reescrita para grupos policíclicos / Rewriting systems for polycyclic groups

Santos, Laredo Rennan Pereira 25 February 2015
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES / In this work we consider monoid presentations Mon⟨X, R⟩, with a set of generators X and relations R, defining groups and monoids as equivalence classes of words over X with respect to the congruence generated by R. Taking R as a rewriting system with respect to a linear ordering of X*, the set of words over X, we can apply rewriting strategies to its rules. We use a version of the Knuth-Bendix method on R to find a confluent rewriting system equivalent to the original, when such a finite system exists. This new set of relations, denoted RC(X, R), ensures that every element of Mon⟨X, R⟩ is defined by a unique irreducible word with respect to RC(X, R). We exhibit several examples of the execution of the Knuth-Bendix method using the functions of the KBMAG package of the GAP system. Lastly, we establish a sufficient condition for certain monoid presentations of polycyclic groups to be confluent.
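As a toy illustration of the idea (the presentation below is invented for illustration and is not from the dissertation), here is a confluent rewriting system for the cyclic group Z6 presented as the monoid Mon⟨a, b | aa = ε, bbb = ε, ba = ab⟩, together with naive reduction to the unique irreducible word:

```python
def normal_form(word, rules):
    """Apply rules until no left-hand side occurs. Terminates because each
    rule decreases the word in shortlex order with a < b."""
    changed = True
    while changed:
        changed = False
        for lhs, rhs in rules.items():
            if lhs in word:
                word = word.replace(lhs, rhs, 1)
                changed = True
                break
    return word

# A confluent system: every element of Z6 has exactly one irreducible word,
# namely one of "", "a", "b", "ab", "bb", "abb".
RULES = {"ba": "ab", "aa": "", "bbb": ""}
```

For example, "baab" reduces to "bb" no matter which applicable rule is chosen first, which is exactly the uniqueness of normal forms that confluence guarantees.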
3

Grammar Rewriting

McAllester, David 01 December 1991
We present a term rewriting procedure based on congruence closure that can be used with arbitrary equational theories. This procedure is motivated by the pragmatic need to prove equations in equational theories where confluence cannot be achieved. The procedure uses context-free grammars to represent equivalence classes of terms. The procedure rewrites grammars rather than terms and uses congruence closure to maintain certain congruence properties of the grammar. Grammars provide concise representations of large term sets: infinite term sets can be represented with finite grammars, and exponentially large term sets can be represented with linear-sized grammars.
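The congruence-closure core the abstract builds on can be sketched naively. This is a quadratic toy version over ground terms, not McAllester's grammar-based procedure: atoms are strings, applications are tuples (symbol, arg1, ...), a union-find tracks equivalence classes, and merging propagates congruence (equal arguments force equal applications):

```python
class CongruenceClosure:
    """Toy congruence closure over ground terms."""

    def __init__(self):
        self.parent = {}
        self.terms = set()

    def _add(self, t):
        if t not in self.parent:
            self.parent[t] = t
            self.terms.add(t)
            if isinstance(t, tuple):
                for arg in t[1:]:
                    self._add(arg)

    def find(self, t):
        self._add(t)
        while self.parent[t] != t:
            self.parent[t] = self.parent[self.parent[t]]  # path halving
            t = self.parent[t]
        return t

    def merge(self, s, t):
        self._add(s)
        self._add(t)
        rs, rt = self.find(s), self.find(t)
        if rs != rt:
            self.parent[rs] = rt
            self._propagate()

    def _propagate(self):
        # Congruence rule: if two applications have the same symbol and
        # pairwise-equal arguments, merge them too; repeat to a fixed point.
        changed = True
        while changed:
            changed = False
            apps = [t for t in self.terms if isinstance(t, tuple)]
            for i in range(len(apps)):
                for j in range(i + 1, len(apps)):
                    s, t = apps[i], apps[j]
                    if (s[0] == t[0] and len(s) == len(t)
                            and self.find(s) != self.find(t)
                            and all(self.find(x) == self.find(y)
                                    for x, y in zip(s[1:], t[1:]))):
                        self.parent[self.find(s)] = self.find(t)
                        changed = True

    def equal(self, s, t):
        self._add(s)
        self._add(t)
        self._propagate()
        return self.find(s) == self.find(t)
```

For instance, asserting f(a) = b and a = c makes f(c) = b derivable by congruence. Replacing the explicit term set with a grammar is what lets the paper's procedure handle infinite equivalence classes.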
4

Grundlegende Textsuchalgorithmen / Basic Text Search Algorithms

Reichelt, Stephan 01 July 2002
This document was created to accompany a talk for the Proseminar Pattern Matching at Chemnitz University of Technology in the winter term 2001/2002. It describes the basic text search algorithms Brute Force, Knuth-Morris-Pratt, Boyer-Moore, and Boyer-Moore-Horspool.
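Of the four algorithms named, Boyer-Moore-Horspool is the simplest to sketch. A minimal version (illustrative, not code from the paper):

```python
def horspool_search(text, pattern):
    """Boyer-Moore-Horspool: on each attempt, inspect the text character
    aligned with the last pattern position and shift by its distance from
    the pattern's end (or the full pattern length if it never occurs in
    the first m-1 pattern positions)."""
    m, n = len(pattern), len(text)
    if m == 0:
        return 0
    # last occurrence of each character among the first m-1 positions
    shift = {pattern[i]: m - 1 - i for i in range(m - 1)}
    i = 0
    while i <= n - m:
        if text[i:i + m] == pattern:
            return i
        i += shift.get(text[i + m - 1], m)
    return -1
```

Unlike the brute-force scan, mismatches can skip up to len(pattern) characters at once, which is why the average-case behavior is sublinear in practice.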
5

Balancing compressed sequences

Pourtavakoli, Saamaan 23 December 2011
The performance of communication and storage systems can be improved if the data being sent or stored has certain patterns and structure. In particular, some systems benefit when the frequencies of the symbols are balanced. These include magnetic and optical data storage devices, as well as future holographic storage systems. Significant research has gone into techniques and algorithms that adapt the data (in a reversible manner) to these systems, aiming to restructure the data to improve performance while keeping complexity as low as possible. In this thesis, we consider balancing binary sequences and present an application to holographic storage systems. We give an overview of different approaches, as well as a survey of previous balancing methods. We show that common compression algorithms can be used for this purpose, both alone and combined with other balancing algorithms. Simplified models are analyzed using information theory to determine the extent of compression in this context. Simulation results using standard data are presented, along with theoretical analysis of the performance of compression combined with other balancing algorithms.
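One classical way to balance a binary word, due to Knuth, is to invert a prefix: as the prefix length k grows by one, the count of ones changes by exactly one, so some k yields an exactly balanced word. A brute-force sketch of this idea (illustrative only; the thesis combines balancing with compression, which is not shown here):

```python
def balance(bits):
    """Knuth-style balancing (sketch): invert a prefix of length k so the
    word has equal numbers of 0s and 1s. The index k must reach the
    receiver too (Knuth encodes it in a short balanced header, omitted
    here). Requires even length."""
    n = len(bits)
    assert n % 2 == 0, "even length required"
    for k in range(n + 1):
        w = [1 - b for b in bits[:k]] + bits[k:]
        if sum(w) == n // 2:        # a balancing k is guaranteed to exist
            return k, w

def unbalance(k, w):
    """Receiver side: re-invert the same prefix to recover the data."""
    return [1 - b for b in w[:k]] + w[k:]
```

The guarantee follows because the ones count moves in unit steps from s (at k = 0) to n - s (at k = n), and n/2 lies between those endpoints.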
6

Grands Réseaux Aléatoires: comportement asymptotique et points fixes / Large random networks: asymptotic behavior and fixed points

Draief, Moez 24 January 2005
Burke's theorem is a classical result in queueing theory. It states that the departure process of an M/M/1 queue is a Poisson process with the same intensity as the arrival process. We present extensions of this result to the queue and to the storage model. We then study these systems in tandem and in the transient regime. We prove that the equations governing the dynamics of the two systems (queue and storage model) are the same, while the relevant variables differ according to the model of interest. Using analogies between these systems and the Robinson-Schensted-Knuth algorithm, we give an elegant proof of the symmetry property of each of the two systems. We also study the correlations between the services of successive customers within a busy period. We then return to Burke's theorem, which can be seen as a fixed-point result: the Poisson process is a fixed point of the queue with exponential service times. We prove fixed-point results in the large-deviations setting, where the input variables are described by their rate functions.
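Burke's theorem is easy to check empirically. A small simulation sketch (illustrative, not from the thesis) generates M/M/1 departure times so one can verify that the long-run departure rate matches the arrival rate λ whenever λ < μ:

```python
import random

def mm1_departures(lam, mu, n, rng):
    """Simulate n customers through an M/M/1 queue: Poisson arrivals of
    rate lam, exponential services of rate mu (stable when lam < mu).
    Burke's theorem: in equilibrium the departures are again Poisson of
    rate lam."""
    arrival = 0.0
    server_free_at = 0.0
    departures = []
    for _ in range(n):
        arrival += rng.expovariate(lam)
        start = max(arrival, server_free_at)      # wait if the server is busy
        server_free_at = start + rng.expovariate(mu)
        departures.append(server_free_at)
    return departures
```

A full empirical check of the theorem would also test that inter-departure gaps are exponential and mutually independent, not merely that their mean is 1/λ.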
7

振舞等価性の証明のための等式付き書換えに基づく潜在帰納法 / Inductionless induction based on rewriting with equations for proving behavioral equivalence

KUSAKARI, Keiichiro, SAKABE, Toshiki, NISHIDA, Naoki, SAKAI, Masahiko, SASADA, Yuji, 草刈, 圭一朗, 坂部, 俊樹, 西田, 直樹, 酒井, 正彦, 笹田, 悠司 07 1900
No description available.
9

Study of plactic monoids by rewriting methods / Etude des monoïdes plaxiques par des méthodes de réécriture

Hage, Nohra 08 December 2016
This thesis focuses on the study of plactic monoids by a new approach using methods issued from rewriting theory. These methods are applied to presentations of plactic monoids given in terms of Young tableaux, Kashiwara's crystal bases, and the Littelmann path model. We study the syzygy problem for the Knuth presentation of the plactic monoids. Using the homotopical completion procedure, which extends Squier's and Knuth-Bendix's completion procedures, we construct coherent presentations of plactic monoids of type A. Such a coherent presentation extends the notion of a presentation of a monoid by a family of generating syzygies, taking into account all the relations among the relations. We make explicit a finite coherent presentation of plactic monoids of type A with the column generators. However, this presentation is not minimal, in the sense that many of its generators are superfluous. After applying the homotopical reduction procedure to this presentation, we reduce it to a finite coherent one that extends the Knuth presentation, thus giving all the syzygies of the Knuth relations. More generally, we deal with presentations of plactic monoids of any type from the rewriting-theory perspective. We construct finite convergent presentations for these monoids in a general way using Littelmann paths. Moreover, we study the latter presentations in terms of Kashiwara's crystal graphs for type C. By introducing the admissible column generators, we obtain a finite convergent presentation of the plactic monoid of type C with explicit relations. This approach should allow us to study the syzygy problem for the presentations of plactic monoids of any type.
10

Anomaly Detection in RFID Networks

Alkadi, Alaa 01 January 2017
Available security standards for RFID networks (e.g. ISO/IEC 29167) are designed to secure individual tag-reader sessions and do not protect against active attacks that could also compromise the system as a whole (e.g. tag cloning or replay attacks). Proper traffic characterization models of the communication within an RFID network can lead to better understanding of operation under "normal" system state conditions and can consequently help identify security breaches not addressed by current standards. This study of RFID traffic characterization considers two piecewise-constant data smoothing techniques, namely the Bayesian blocks and Knuth's algorithms, over time-tagged events and compares them in the context of rate-based anomaly detection. This was accomplished using data from experimental RFID readings, comparing (1) event counts over time under the smoothed curves versus empirical histograms of the raw data, and (2) the threshold-dependent alert rates based on inter-arrival times obtained from the smoothed curves versus those of the raw data itself. Results indicate that both algorithms adequately model RFID traffic in which inter-event time statistics are stationary, but that Bayesian blocks become superior for traffic in which such statistics experience abrupt changes.
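The rate-based alert criterion compared in the study can be sketched in its simplest form (a hypothetical simplification, not the study's actual pipeline): compute inter-arrival gaps from time-tagged events and report the fraction falling below a threshold:

```python
def inter_arrivals(timestamps):
    """Gaps between consecutive time-tagged events (timestamps sorted)."""
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

def alert_rate(gaps, threshold):
    """Fraction of gaps shorter than the threshold: a burst of fast events
    (e.g. rapid repeated tag reads during a replay attack) pushes this up."""
    return sum(1 for dt in gaps if dt < threshold) / len(gaps)
```

In the study's setup the threshold would be derived from a smoothed (piecewise-constant) model of normal traffic rather than chosen by hand, and the alert rate from the smoothed curve is compared against that of the raw data.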
