1

A SURVEY OF LIMITED NONDETERMINISM IN COMPUTATIONAL COMPLEXITY THEORY

Levy, Matthew Asher 01 January 2003
Nondeterminism is typically used as an inherent part of the computational models studied in computational complexity. However, much work has been done on treating nondeterminism as a separate resource added to deterministic machines. This survey examines several different approaches to limiting the amount of nondeterminism, including Kintala and Fischer's β hierarchy and Cai and Chen's guess-and-check model.
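Both Kintala and Fischer's bounded-nondeterminism hierarchy and Cai and Chen's guess-and-check model can be read as a deterministic verifier handed a short guess string. A minimal sketch of that view, assuming a hypothetical subset-sum checker (not an example from the survey):

```python
from itertools import product

def guess_and_check(x, g_bits, check):
    """Limited nondeterminism as guess-and-check: accept iff some guess
    string of g_bits bits makes the deterministic checker accept.
    Deterministic simulation costs 2**g_bits checker runs, which is why
    bounding the guess length matters."""
    return any(check(x, guess) for guess in product([0, 1], repeat=g_bits))

# Hypothetical checker: does x contain a pair of elements summing to 10?
# The guess is the characteristic vector of the chosen pair.
def check_pair(x, guess):
    idx = [i for i, b in enumerate(guess) if b]
    return len(idx) == 2 and sum(x[i] for i in idx) == 10

print(guess_and_check([3, 7, 5, 1], 4, check_pair))  # True: 3 + 7 == 10
```

With g(n) nondeterministic bits, the deterministic blow-up is 2^g(n), so a machine limited to O(log n) guess bits stays within polynomial deterministic time.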
2

Approximate string matching using fuzzy finite automata

Alexandre Maciel 02 June 2006
The approximate string matching problem recurs in many applications where a computer processes data subject to imprecision, errors, and distortion. Numerous methods, techniques, and metrics have been created to solve this class of problem, but most are inflexible in at least one respect: architecture, the metric used to measure error, or application specificity. This work proposes and analyzes the use of fuzzy finite automata to solve this class of problem. Fuzzy theory provides a solid theoretical basis for handling imprecise or error-prone information, while the finite automaton is a well-established mathematical tool for string matching. The resulting hybrid model not only offers a flexible solution to the proposed problem but also serves as a basis for solving many other problems that depend on processing imprecise information.
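A rough sketch of the kind of hybrid the abstract describes: a fuzzy automaton whose transitions carry weights in [0, 1], with acceptance computed by max-min composition. The toy weights below are illustrative assumptions, not the thesis's model:

```python
def fuzzy_run(transitions, start, finals, word):
    """Fuzzy acceptance via max-min composition: the degree to which `word`
    is accepted is the maximum, over all runs, of the minimum transition
    weight along the run (0.0 if no run reaches a final state)."""
    degrees = {start: 1.0}                    # state -> best degree so far
    for sym in word:
        nxt = {}
        for state, d in degrees.items():
            for q, w in transitions.get((state, sym), []):
                nd = min(d, w)
                if nd > nxt.get(q, 0.0):
                    nxt[q] = nd
        degrees = nxt
    return max((d for q, d in degrees.items() if q in finals), default=0.0)

# Toy weights (assumed for illustration): reading 'b' where 'a' is
# expected, or vice versa, still matches, but only to degree 0.6.
t = {(0, 'a'): [(1, 1.0)], (0, 'b'): [(1, 0.6)],
     (1, 'b'): [(2, 1.0)], (1, 'a'): [(2, 0.6)]}
print(fuzzy_run(t, 0, {2}, "ab"))  # 1.0  (exact match)
print(fuzzy_run(t, 0, {2}, "bb"))  # 0.6  (one fuzzy step)
```

The returned degree plays the role of a similarity score, which is what makes the model flexible about which error metric it encodes.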
4

A Unified Model of Pattern-Matching Circuits for Field-Programmable Gate Arrays

Clark, Christopher R. 28 August 2006
The objective of this dissertation is to develop a methodology for describing the functionality, analyzing the complexity, and evaluating the performance of a large class of pattern-matching circuit design approaches for field-programmable gate arrays (FPGAs). The developed methodology consists of three elements. The first is a functional model and associated nomenclature that unifies a significant portion of published circuit design approaches while also illuminating many novel approaches. The second is a set of analytical expressions that model the area and time complexity of each circuit design approach based on attributes of a given pattern set. Third, software tools are developed that facilitate architectural design space exploration and circuit implementation. This methodology is used to conduct an extensive evaluation and comparison of design approaches under a wide range of conditions using pattern sets from multiple application domains as well as synthetic pattern sets. The results indicate strong dependencies between pattern set properties and circuit performance and provide new insights into the fundamental nature of various design approaches. A number of techniques have been proposed for designing pattern-matching hardware circuits with reconfigurable FPGA chips. The use of FPGAs enables high performance because the circuits can be customized for a particular application and pattern set. A relatively unstudied consequence of tailoring circuits for specific patterns is that circuit area and performance are affected by various properties of the patterns used. Most previous work in this field only considers a single design approach and a small number of pattern sets. Therefore, it is not clear how each design is affected by pattern set properties. For a given set of patterns, it is difficult to determine which approach would be the most efficient or provide the highest performance.
Previous attempts to compare approaches using results from different publications are conflicting and inconclusive due to variations in the FPGA devices, patterns, and circuit optimizations used. There has been no attempt to evaluate a wide range of designs under a common set of conditions. The methodology presented in this dissertation provides a framework for studying multiple aspects of FPGA pattern-matching circuits in a controlled and consistent manner.
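Many of the FPGA pattern-matching circuits such a methodology covers update one bit per pattern position in parallel on each clock cycle. The classic software analogue is bit-parallel shift-and matching, sketched here purely for intuition (not code from the dissertation):

```python
def shift_and(pattern, text):
    """Bit-parallel shift-and matching: one integer tracks all prefix
    matches simultaneously, mirroring the per-character update a
    pattern-matching circuit performs each clock cycle. In hardware the
    pattern length is bounded by the register width; Python integers
    have no such bound."""
    masks = {}
    for i, c in enumerate(pattern):
        masks[c] = masks.get(c, 0) | (1 << i)
    accept = 1 << (len(pattern) - 1)
    state, hits = 0, []
    for pos, c in enumerate(text):
        state = ((state << 1) | 1) & masks.get(c, 0)
        if state & accept:
            hits.append(pos - len(pattern) + 1)   # start index of a match
    return hits

print(shift_and("aba", "ababa"))  # [0, 2]
```

The per-symbol update is a shift, an OR, and an AND, which is why this family of designs maps so directly onto FPGA logic.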
5

Towards cache optimization in finite automata implementations

Ketcha Ngassam, Ernest 21 July 2007
To the best of our knowledge, the only available implementations of FA-based string recognizers are the so-called conventional table-driven algorithm and, of course, its hardcoded counterpart, suggested by Thompson, Penello, and DeRemer in 1967, 1986, and 2004 respectively. However, our early experiments have shown that the performance of both implementations is hampered by the random-access nature of the automaton's transition table in the table-driven case, and by the random-access nature of the directly executable instructions that make up each hardcoded state. Moreover, memory load and instruction load are also performance bottlenecks of these algorithms, since, as the automaton size grows, more space in memory is required to hold the data/instructions relevant to the states. This thesis exploits the notion of cache optimization (which requires good organization of data or instructions) in investigating various enhancements of both table-driven and hardcoded recognizers. Functions have been used to formally define the denotational semantics of string recognizers. These functions rely on various so-called strategy variables that are integrated into the formal definition of each recognizer. By appropriately selecting these variables, the conventional algorithms may be described without loss of generality. By specializing these strategy variables, the new and enhanced recognizers can be denotationally described, and the resulting algorithms can then be implemented. We first introduce the so-called Dynamic State Allocation (DSA) strategy, regarded as a sort of just-in-time (JIT) implementation of FA-based string recognizers, whereby a predefined portion of the memory is reserved for acceptance testing. Then follows the State pre-Ordering (SpO) strategy, which assumes some prior knowledge of the order in which states would be visited. In this case, acceptance testing takes place once each state has been allocated to its new position in memory.
The last strategy, referred to as the Allocated Virtual Caching (AVC) strategy, is based on the premise that a portion of the memory originally occupied by the automaton's states is virtually used as a sort of cache memory in which acceptance testing takes place, thereby enabling the exploitation of the various performance-enhancement notions on which hardware cache memory relies. It is shown that the algorithms can be classified in a taxonomy tree, which is further mapped into a class diagram that represents the design of a toolkit for FA-based string recognition. Also given in the thesis are empirical results indicating that the suggested algorithms can, in general, outperform their conventional counterparts when recognizing large and appropriately chosen input strings. / Thesis (PhD (Computer Science))--University of Pretoria, 2007. / Computer Science / PhD / unrestricted
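For reference, the conventional table-driven recognizer that the strategies above set out to improve looks roughly like this; the scattered lookups into `table` are the cache-unfriendly accesses discussed (the toy automaton is a made-up example, not one from the thesis):

```python
def table_driven_accepts(table, finals, word, start=0):
    """Conventional table-driven acceptance test: each input symbol indexes
    into the transition table, so large automata produce scattered,
    cache-unfriendly memory accesses."""
    state = start
    for sym in word:
        state = table.get((state, sym))
        if state is None:          # no transition defined: reject
            return False
    return state in finals

# Toy automaton for the language a(ba)* (a hypothetical example):
table = {(0, 'a'): 1, (1, 'b'): 0}
print(table_driven_accepts(table, {1}, "aba"))  # True
```

A hardcoded recognizer would instead compile each state into straight-line branching code, trading table lookups for instruction fetches, which is exactly the trade-off the thesis's strategies tune.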
6

Flexible finite automata-based algorithms for detecting microsatellites in DNA

De Ridder, Corne 17 August 2010
Apart from contributing to Computer Science, this research also contributes to Bioinformatics, a subset of the subject discipline Computational Biology. The main focus of this dissertation is the development of a data-analytical and theoretical algorithm to contribute to the analysis of DNA and, in particular, to detect microsatellites. Microsatellites, considered in the context of this dissertation, refer to consecutive patterns contained in genomic sequences. A perfect tandem repeat is defined as a string of nucleotides which is repeated at least twice in a sequence. An approximate tandem repeat is a string of nucleotides repeated consecutively at least twice, with small differences between the instances. The research presented in this dissertation was inspired by molecular biologists who were found to be visually scanning genetic sequences in search of short approximate tandem repeats, or so-called microsatellites. The aim of this dissertation is to present three algorithms that search for short approximate tandem repeats. The algorithms are built on implementations of finite automata. Thus the hypothesis posed is as follows: finite automata can detect microsatellites effectively in DNA. "Effectively" includes the ability to fine-tune the detection process so that redundant data is avoided and relevant data is not missed during search. In order to verify whether the hypothesis holds, three related theoretical algorithms have been proposed based on theorems from finite automaton theory. They are generically referred to as the FireµSat algorithms. These algorithms have been implemented, and the performance of FireµSat2 has been investigated and compared to other software packages. From the results obtained, it is clear that the performance of these algorithms differs in terms of attributes such as speed, memory consumption and extensibility. In respect of speed, FireµSat outperformed rival software packages.
It will be seen that the FireµSat algorithms have several parameters that can be used to tune their search. It should be emphasized that these parameters have been devised in consultation with the intended user community, in order to enhance the usability of the software. It was found that the parameters of FireµSat can be set to detect more tandem repeats than rival software packages, but also tuned to limit the number of detected tandem repeats. Copyright / Dissertation (MSc)--University of Pretoria, 2010. / Computer Science / unrestricted
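As a baseline for what the FireµSat algorithms generalize, a naive detector of perfect tandem repeats (fixed motif length, exact copies only; handling approximate repeats is what makes the real problem hard) might look like this sketch:

```python
def perfect_tandem_repeats(seq, motif_len, min_copies=2):
    """Report (start, motif, copies) for runs where a motif of length
    motif_len repeats consecutively at least min_copies times."""
    hits, i = [], 0
    while i + motif_len * min_copies <= len(seq):
        motif = seq[i:i + motif_len]
        copies = 1
        # extend the run while the next motif_len characters match exactly
        while seq[i + copies * motif_len : i + (copies + 1) * motif_len] == motif:
            copies += 1
        if copies >= min_copies:
            hits.append((i, motif, copies))
            i += copies * motif_len   # skip past the reported run
        else:
            i += 1
    return hits

print(perfect_tandem_repeats("ACACACGT", 2))  # [(0, 'AC', 3)]
```

The tunable parameters the abstract mentions play the role of `motif_len` and `min_copies` here, plus error thresholds that this exact-matching sketch omits.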
7

Process-based decomposition and multicore performance : case studies from Stringology

Strauss, Marthinus David January 2017
Current computing hardware supports parallelism at various levels. Conventional programming techniques, however, do not efficiently utilise this growing resource. This thesis seeks a better fit between software and current hardware while following a hardware-agnostic software development approach. This allows the programmer to remain focussed on the problem domain. The thesis proposes process-based problem decomposition as a natural way to structure a concurrent implementation that may also improve multicore utilisation and, consequently, run-time performance. The thesis presents four algorithms as case studies from the domain of string pattern matching and finite automata. Each case study is conducted in the following manner. The particular sequential algorithm is decomposed into a number of communicating concurrent processes. This decomposition is described in the process algebra CSP. Hoare's CSP was chosen as one of the best-known process algebras, for its expressive power, conciseness, and overall simplicity. Once the CSP-based process description has brought ideas to a certain level of maturity, the description is translated into a process-based implementation. The Go programming language was used for the implementation, as its concurrency features were inspired by CSP. The performance of the process-based implementation is then compared against its conventional sequential version (also provided in Go). The goal is not to achieve maximal performance, but to compare the run-time performance of an ``ordinary'' programming effort focussed on a process-based solution against a conventional sequential implementation. Although some implementations did not perform as well as others, some did significantly outperform their sequential counterparts. The thesis thus provides prima facie evidence that a process-based decomposition approach is promising for achieving a better fit between software and current multicore hardware.
/ Thesis (PhD)--University of Pretoria, 2017. / Computer Science / PhD / Unrestricted
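The thesis implements its decompositions in Go; a language-neutral sketch of the same idea, with a thread and two queues standing in for a CSP process and its channels (the pattern-counting stage is a hypothetical example, not one of the four case studies):

```python
import threading
import queue

def match_stage(pattern, inbox, outbox):
    """One 'process': reads text chunks from its input channel and emits
    a match count per chunk on its output channel."""
    while True:
        chunk = inbox.get()
        if chunk is None:          # poison pill: propagate and terminate
            outbox.put(None)
            return
        outbox.put(chunk.count(pattern))

def concurrent_count(pattern, chunks):
    """Feed chunks to the stage over a channel and sum its results."""
    inbox, outbox = queue.Queue(), queue.Queue()
    worker = threading.Thread(target=match_stage, args=(pattern, inbox, outbox))
    worker.start()
    for c in chunks:
        inbox.put(c)
    inbox.put(None)
    total = 0
    while (n := outbox.get()) is not None:
        total += n
    worker.join()
    return total

print(concurrent_count("ab", ["abab", "xxab"]))  # 3
```

In Go the same shape would be a goroutine reading from and writing to channels; the decomposition, not the language, is the point.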
8

Constructing minimal acyclic deterministic finite automata

Watson, Bruce William 30 March 2011
This thesis is submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy (Ph.D) in the FASTAR group of the Department of Computer Science, University of Pretoria, South Africa. I present a number of algorithms for constructing minimal acyclic deterministic finite automata (MADFAs), most of which I originally derived/designed or co-discovered. Being acyclic, such automata represent finite languages and have proven useful in applications such as spellchecking, virus-searching and text indexing. In many of those applications, the automata grow to billions of states, making them difficult to store without using various compression techniques — the most important of which is minimization. Results from the late 1950’s show that minimization yields a unique automaton (for a given language), and later results show that minimization of acyclic automata is possible in time linear in the number of states. These two results make for a rich area of algorithmics research; automata and algorithmics research are relatively old fields of computing science and the discovery/invention of new algorithms in the field is an exciting result. I present both incremental and nonincremental algorithms. With nonincremental techniques, the unminimized acyclic deterministic finite automaton (ADFA) is first constructed and then minimized. As mentioned above, the unminimized ADFA can be very large indeed — often even too large to fit within the virtual memory space of the computer. As a result, incremental techniques for minimization (i.e. the ADFA is minimized during its construction) become interesting. Incremental algorithms frequently have some overhead: if the unminimized ADFA fits easily within physical memory, it may still be faster to use nonincremental techniques. The presentation used in this thesis has a few unusual characteristics:

- Few other presentations follow a correctness-by-construction style for presenting and deriving algorithms. The presentations given here include correctness arguments or sketches thereof.
- The presentation is taxonomic, emphasizing the similarities and differences between the algorithms at a fundamental level.
- While it is possible to present these algorithms in a formal-language-theoretic setting, this thesis remains somewhat closer to the actual implementation issues.
- In several chapters, new algorithms and interesting new variants of existing algorithms are presented.
- It gives new presentations of many existing algorithms, all in a common format with common examples.
- There are extensive links to the existing literature.

/ Thesis (PhD)--University of Pretoria, 2010. / Computer Science / unrestricted
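One incremental algorithm of the kind the thesis covers is the well-known construction for lexicographically sorted input: after each word is added, the states on the previous word's now-frozen suffix are merged into a register of unique states, so the ADFA stays minimal throughout. A compact sketch under that sorted-input assumption (not the thesis's own code):

```python
class State:
    __slots__ = ("final", "edges")
    def __init__(self):
        self.final = False
        self.edges = {}                      # symbol -> State
    def signature(self):
        # two states are equivalent iff they agree on finality and edges
        return (self.final, tuple((c, id(s)) for c, s in sorted(self.edges.items())))

def build_madfa(sorted_words):
    register, root, prev = {}, State(), ""
    def traverse(state, s):
        for c in s:
            state = state.edges[c]
        return state
    def replace_or_register(state, suffix):
        if not suffix:
            return
        child = state.edges[suffix[0]]
        replace_or_register(child, suffix[1:])   # deepest states first
        sig = child.signature()
        if sig in register:
            state.edges[suffix[0]] = register[sig]   # merge with equivalent
        else:
            register[sig] = child
    for word in sorted_words:
        k = 0                                    # longest common prefix
        while k < min(len(word), len(prev)) and word[k] == prev[k]:
            k += 1
        last = traverse(root, word[:k])
        replace_or_register(last, prev[k:])      # minimize frozen suffix
        node = last
        for c in word[k:]:                       # append the new suffix
            node.edges[c] = State()
            node = node.edges[c]
        node.final = True
        prev = word
    replace_or_register(root, prev)              # minimize the last word
    return root

def accepts(root, word):
    node = root
    for c in word:
        node = node.edges.get(c)
        if node is None:
            return False
    return node.final
```

Because the register holds at most one state per equivalence class, the automaton never contains two equivalent states, which is the minimality invariant the incremental algorithms maintain.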
9

Implementation of a General Disassembler

Přikryl, Zdeněk January 2007
This thesis presents the process of creating a disassembler for newly designed processors. We require automatic generation of the disassembler. The processor's instruction set is modeled in the specialized language ISAC, which offers resources for describing the instruction set: for example, the format of an instruction in assembly language, its binary encoding, and its behavior. The internal model is a coupled finite automaton, which formally captures the relation between the textual and binary forms of each instruction. The disassembler's code is generated from this internal model. The disassembler accepts a program in binary code as input and generates an equivalent program in assembly language as output.
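Stripped of the ISAC machinery, the generated disassembler's core is a table that inverts the instruction encoding: look up the opcode, consume its operand bytes, and emit the assembly text. A toy sketch with a made-up opcode table (not the ISAC model itself):

```python
# Hypothetical opcode table for a toy 8-bit ISA: mnemonic, operand bytes.
OPCODES = {
    0x00: ("nop", 0),
    0x10: ("ldi", 1),
    0x20: ("add", 1),
    0x30: ("jmp", 2),
}

def disassemble(code):
    """Walk the binary, mapping each opcode back to its textual form:
    the inverse of assembly. Unknown bytes are emitted as raw data."""
    out, i = [], 0
    while i < len(code):
        mnem, nops = OPCODES.get(code[i], ("db", 0))
        operands = ", ".join(f"0x{b:02x}" for b in code[i + 1 : i + 1 + nops])
        out.append(f"{mnem} {operands}".strip())
        i += 1 + nops
    return out

print(disassemble(bytes([0x10, 0x05, 0x00])))  # ['ldi 0x05', 'nop']
```

In the generated setting, this table is derived automatically from the ISAC description rather than written by hand, which is the thesis's point.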
10

Synchronization, Road Coloring, and Jumps in Finite Automata

Vorel, Vojtěch January 2015
Multiple original results in the theory of automata and formal languages are presented, dealing mainly with combinatorial problems and complexity questions related to reset words and road coloring. The other results concern jumping finite automata and related types of rewriting systems.
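A reset (synchronizing) word, the central object of the reset-word results, is a word that drives every state of a DFA to one common state, regardless of the start state; checking a candidate word is straightforward. The three-state Černý automaton below is the standard example:

```python
def is_reset_word(delta, states, word):
    """w is a reset (synchronizing) word iff applying it maps the whole
    state set to a single state, regardless of the starting state."""
    image = set(states)
    for sym in word:
        image = {delta[(q, sym)] for q in image}
    return len(image) == 1

# Černý automaton C3: 'a' cycles the states, 'b' collapses state 2 onto 0.
delta = {(0, 'a'): 1, (1, 'a'): 2, (2, 'a'): 0,
         (0, 'b'): 0, (1, 'b'): 1, (2, 'b'): 0}
print(is_reset_word(delta, {0, 1, 2}, "baab"))  # True, length 4 = (3-1)**2
```

The hard combinatorial questions concern how short such words can be (the Černý conjecture bounds the shortest reset word by (n-1)^2 for n states), not this membership check itself.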
