About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1. A computer architecture for the implementation of SDL

Crutcher, L. A. January 1989 (has links)
No description available.
2. Slicing of extended finite state machines

Atchuta, Kaushik January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Torben Amtoft / An EFSM (Extended Finite State Machine) is a tuple (S, T, E, V), where S is a finite set of states, T is a finite set of transitions, E is a finite set of events, and V is a finite set of variables. Every transition t in T has a source state and a target state, both in S. There is a need for a GUI that aids in building such machines and simulating them so that a slicing algorithm can be applied to the resulting graphs. This was the idea of Dr. Torben Amtoft, who wrote the slicing algorithm and wanted it implemented in code. The project aims to implement a GUI that builds and simulates such graphs with minimum user effort. Since poor design often fails to attract users, the initial effort is to build a simple and effective GUI that takes input from the user, builds graphs, and simulates them. The scope of this project is to build and implement an interface so that users can, in an effective way:
• input a specification of an EFSM,
• store and later retrieve EFSMs,
• display an EFSM in graphical form,
• simulate the EFSM,
• modify an EFSM, and
• apply the slicing algorithm.
All of these features must be integrated into the GUI, and it should fail only if the input specification is wrong.
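As a rough illustration of the data structure such a tool would manipulate, the following Python sketch models an EFSM as the tuple (S, T, E, V) from the abstract, with guards and variable updates attached to transitions. The class names, the guard/update encoding, and the example machine are assumptions for illustration, not the project's actual code.

```python
# A minimal sketch of an EFSM as the tuple (S, T, E, V) described above.
# The encoding of guards/updates and the example machine are illustrative
# assumptions, not the project's actual implementation.
from dataclasses import dataclass
from typing import Callable, Dict, List

Env = Dict[str, int]   # a valuation of the variables V

@dataclass
class Transition:
    source: str                                   # source state in S
    target: str                                   # target state in S
    event: str                                    # triggering event in E
    guard: Callable[[Env], bool] = lambda env: True
    update: Callable[[Env], Env] = lambda env: env

@dataclass
class EFSM:
    states: List[str]                             # S
    transitions: List[Transition]                 # T
    events: List[str]                             # E
    variables: Env                                # current valuation of V
    current: str = ""

    def step(self, event: str) -> bool:
        """Fire the first enabled transition for `event`, if any."""
        for t in self.transitions:
            if t.source == self.current and t.event == event and t.guard(self.variables):
                self.variables = t.update(dict(self.variables))
                self.current = t.target
                return True
        return False

# Example: a two-state machine that counts 'tick' events while busy.
m = EFSM(
    states=["idle", "busy"],
    transitions=[
        Transition("idle", "busy", "start"),
        Transition("busy", "busy", "tick",
                   update=lambda env: {**env, "count": env["count"] + 1}),
        Transition("busy", "idle", "stop"),
    ],
    events=["start", "tick", "stop"],
    variables={"count": 0},
    current="idle",
)
for e in ["start", "tick", "tick", "stop"]:
    m.step(e)
print(m.current, m.variables)   # idle {'count': 2}
```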
3. An investigation of distributed modelling and intelligent agent techniques for collaborative task support

Patel, Rakeshkumar January 2002 (has links)
No description available.
4. Algebraic Theory of Minimal Nondeterministic Finite Automata with Applications

Cazalis, Daniel S. 14 November 2007 (has links)
Since the 1950s, the theory of deterministic and nondeterministic finite automata (DFAs and NFAs, respectively) has been a cornerstone of theoretical computer science. In this dissertation, our main object of study is minimal NFAs. In contrast with minimal DFAs, minimal NFAs are computationally challenging: first, there can be more than one minimal NFA recognizing a given language; second, the problem of converting an NFA to a minimal equivalent NFA is NP-hard, even for NFAs over a unary alphabet. Our study is based on the development of two main theories, inductive bases and partials, which in combination form the foundation for an incremental algorithm, ibas, to find minimal NFAs. An inductive basis is a collection of languages with the property that it can generate (through union) each of the left quotients of its elements. We prove a fundamental characterization theorem which says that a language can be recognized by an n-state NFA if and only if it can be generated by an n-element inductive basis. A partial is an incompletely-specified language. We say that an NFA recognizes a partial if its language extends the partial, meaning that the NFA's behavior is unconstrained on unspecified strings; it follows that a minimal NFA for a partial is also minimal for its language. We therefore direct our attention to minimal NFAs recognizing a given partial. Combining inductive bases and partials, we generalize our characterization theorem, showing that a partial can be recognized by an n-state NFA if and only if it can be generated by an n-element partial inductive basis. We apply our theory to develop and implement ibas, an incremental algorithm that finds minimal partial inductive bases generating a given partial. In the case of unary languages, ibas can often find minimal NFAs of up to 10 states in about an hour of computing time; with brute-force search this would require many trillions of years.
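To make the cost of exhaustive search concrete, here is a hedged sketch that brute-forces the smallest unary NFA (initial state fixed to 0, the transition relation a set of state pairs) agreeing with a target set of accepted word lengths up to a bound. It is only a toy illustration of why brute force scales so badly; it is not the ibas algorithm or the inductive-basis machinery of the dissertation.

```python
# Illustrative brute-force search for a small NFA over a unary alphabet that
# agrees with a target language on all strings up to a length bound.
from itertools import product

def nfa_accepts_lengths(n, delta, finals, max_len):
    """Set of word lengths <= max_len accepted by an n-state unary NFA with
    initial state 0, transition relation `delta` (set of (p, q) pairs) and
    accepting states `finals`."""
    reachable, accepted = {0}, set()
    for length in range(max_len + 1):
        if reachable & finals:
            accepted.add(length)
        reachable = {q for p in reachable for (s, q) in delta if s == p}
    return accepted

def minimal_unary_nfa(target, max_len, max_states=3):
    """Enumerate all NFAs with up to `max_states` states and return the first
    (hence smallest) one whose accepted lengths up to max_len equal `target`."""
    for n in range(1, max_states + 1):
        pairs = [(p, q) for p in range(n) for q in range(n)]
        for finals_bits in product([0, 1], repeat=n):
            finals = {i for i, b in enumerate(finals_bits) if b}
            for delta_bits in product([0, 1], repeat=len(pairs)):
                delta = {pairs[i] for i, b in enumerate(delta_bits) if b}
                if nfa_accepts_lengths(n, delta, finals, max_len) == target:
                    return n, delta, finals
    return None

# Lengths divisible by 3, checked on words up to length 9: expect a 3-state machine.
print(minimal_unary_nfa({0, 3, 6, 9}, max_len=9))
```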
5. FiniteFuzz: Finite State Machine Fuzzer For Industrial Control IoT Devices

Kaur, Jaskaran 03 July 2023 (has links)
Automated software testing techniques have become increasingly popular in recent years, with fuzzing being one of the most prevalent approaches. However, fuzzing Finite State Machines (FSMs) poses a significant challenge due to state and input dependency, resulting in exponential exploration time to unlock the Finite State Machine. To address this issue, we introduce FINITEFUZZ, a grey-box fuzzer explicitly designed to fuzz Finite State Machines. Unlike black-box fuzzers, FINITEFUZZ employs a mutational technique that uses feedback to steer the fuzzing process: it takes a random set of states, compares them against the desired FSM, and records the states that increase coverage of the Finite State Machine. Each new seed incorporates the feedback received from all previous seed inputs. This avoids exploring the same path multiple times and yields linear performance for all types of Finite State Machines. Our findings reveal that FINITEFUZZ significantly reduces the exploration time required to uncover each state of the machine, making it a promising solution for generating Finite State Machines. We tested FINITEFUZZ on 4 different types of Finite State Machines, with each scenario showing at least a 5X performance improvement in FSM generation. The potential applications of FSMs are vast, and our research suggests that the proposed approach can be used to generate any type of Finite State Machine. / Master of Science / Fuzzing, also known as fuzz testing, is a technique used to test software for security vulnerabilities, errors, and unexpected behavior. It involves generating random or semi-random input to a software application, such as an operating system or network service, to test how it responds. Once input is generated, it is sent to the target application, which may crash, hang, or produce unexpected results. The results are then analyzed to identify potential vulnerabilities such as buffer overflows, input validation errors, and resource leaks. Fuzzing is also used to test software that is difficult to test through other means, such as closed-source software or embedded systems. We built a fuzzer, FINITEFUZZ, for Finite State Machines that unlocks the FSM starting from random input and explores only those seeds that increase the test coverage.
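The feedback loop sketched in the abstract (mutate a seed, run it, and keep it only if it reaches states not yet covered) can be illustrated in a few lines. The toy FSM, the mutation operators, and the coverage bookkeeping below are assumptions made for illustration, not the actual FINITEFUZZ implementation.

```python
# A simplified sketch of coverage-guided (grey-box) fuzzing of an FSM.
import random

# Hypothetical target FSM: state -> {input symbol -> next state}.
FSM = {
    "S0": {"a": "S1", "b": "S0"},
    "S1": {"a": "S1", "b": "S2"},
    "S2": {"a": "S3", "b": "S0"},
    "S3": {"a": "S3", "b": "S3"},
}
ALPHABET = ["a", "b"]

def run(seq, start="S0"):
    """Execute an input sequence and return the set of states it visits."""
    state, visited = start, {start}
    for sym in seq:
        state = FSM[state][sym]
        visited.add(state)
    return visited

def fuzz(max_iters=1000, seed_len=4):
    random.seed(0)
    covered = set()
    corpus = [[random.choice(ALPHABET) for _ in range(seed_len)]]
    for _ in range(max_iters):
        parent = random.choice(corpus)
        child = list(parent)
        if random.random() < 0.5:                       # mutate: flip one symbol
            child[random.randrange(len(child))] = random.choice(ALPHABET)
        else:                                           # or append a symbol
            child.append(random.choice(ALPHABET))
        new_states = run(child) - covered
        if new_states:                                  # feedback: keep seeds that add coverage
            covered |= new_states
            corpus.append(child)
        if covered == set(FSM):                         # all states unlocked
            break
    return covered

print(sorted(fuzz()))   # expected: ['S0', 'S1', 'S2', 'S3']
```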
6. Memory Cost of Quantum Contextuality

Harrysson, Patrik January 2016 (has links)
This study takes an information-theoretic approach to quantum contextuality, using the memory complexity of finite-state machines to quantify it. These machines simulate the outcome behaviour of sequential measurements on systems of quantum bits as predicted by quantum mechanics. Of interest is whether classical representations by finite-state machines can effectively represent the state-independent contextual outcome behaviour. Here we consider spatial efficiency, rather than the temporal efficiency considered by D. Gottesman (1999), for the particular measurement dynamics in systems of quantum bits. Extensions of cases found in the related study of Kleinmann et al. (2010) are established, which yield upper bounds on memory complexity for particular scenarios. Furthermore, an optimal machine structure for simulating any n-partite system of quantum bits is found, which gives a lower bound on the memory complexity for each natural number n. Within this finite-state-machine approach, we sought to address foundational questions about quantum mechanics; however, no novel conclusions on those questions are reported here.
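The machine model in question can be pictured as a Mealy-style automaton that reads a sequence of measurement labels and emits an outcome for each, with memory cost taken as log2 of the number of internal states. The sketch below is a generic toy of that model; the concrete two-state machine and its outcomes are assumptions, not the simulations of quantum-bit measurements constructed in the thesis.

```python
# A minimal sketch of a finite-state machine that maps measurement labels to
# outcomes, with memory cost measured as log2 of the number of states.
import math

class OutcomeMachine:
    def __init__(self, states, start, step):
        self.states = states          # finite set of internal states
        self.state = start
        self.step = step              # (state, measurement) -> (outcome, next state)

    def run(self, measurements):
        outcomes = []
        for m in measurements:
            out, self.state = self.step(self.state, m)
            outcomes.append(out)
        return outcomes

    def memory_cost_bits(self):
        return math.log2(len(self.states))

# Toy machine with two states that flips its answer to "Z" after seeing "X".
def step(state, m):
    if m == "X":
        return +1, 1 - state
    return (+1 if state == 0 else -1), state

machine = OutcomeMachine(states={0, 1}, start=0, step=step)
print(machine.run(["Z", "X", "Z", "Z"]))   # [1, 1, -1, -1]
print(machine.memory_cost_bits())          # 1.0
```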
7. Content-aware Intra Prediction for H.264/AVC

Wu, Chia-shiu 05 September 2010 (has links)
This paper proposes new approaches to improve the performance of intra block coding in H.264/AVC via a finite state machine and residual prediction. Building on the high correlation between neighboring blocks, a finite state machine is employed at both the encoder and the decoder to reduce the number of bits required for encoding and thereby enhance coding performance. Two extra intra prediction modes are created in the proposed method. Through these two modes, the number of bits required to denote the current block is greatly reduced and a low bit rate can be achieved. Exploiting spatial correlation, intra-coded residual prediction reduces the residual of the current block using the neighboring residual block. In this paper, we combine the finite state machine with intra-coded residual prediction to achieve better coding performance. Experimental results show that the proposed method can greatly improve the coding efficiency of intra macroblock coding in H.264/AVC.
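The underlying idea, a state machine kept in lockstep at the encoder and decoder so that the current intra mode can often be inferred from context and signalled with fewer bits, can be sketched as follows. The prediction rule, the bit accounting, and the mode values are simplified assumptions, not the H.264/AVC syntax or the paper's two extra modes.

```python
# An illustrative sketch of keeping a synchronized predictor at encoder and
# decoder so that intra modes matching the prediction need only a flag.

class ModePredictor:
    """Shared by encoder and decoder: predicts the next block's mode as the
    previously coded mode (a stand-in for a richer neighbour-based FSM)."""
    def __init__(self):
        self.state = 2              # start from a default mode (e.g., DC)

    def predict(self):
        return self.state

    def update(self, actual_mode):
        self.state = actual_mode

def encode(modes):
    pred, bitstream = ModePredictor(), []
    for m in modes:
        if m == pred.predict():
            bitstream.append(("HIT", None))      # a 1-bit flag in a real coder
        else:
            bitstream.append(("MISS", m))        # flag plus explicit mode index
        pred.update(m)
    return bitstream

def decode(bitstream):
    pred, modes = ModePredictor(), []
    for flag, payload in bitstream:
        m = pred.predict() if flag == "HIT" else payload
        modes.append(m)
        pred.update(m)
    return modes

modes = [2, 2, 2, 0, 0, 1, 1, 1]
assert decode(encode(modes)) == modes            # encoder and decoder stay in sync
print(sum(1 for f, _ in encode(modes) if f == "HIT"), "of", len(modes), "modes predicted")  # 6 of 8
```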
8. Implementation of Embedded Mandarin Speech Recognition System in Travel Domain

Chen, Bo-han 07 September 2009 (has links)
We build a two-pass Mandarin Automatic Speech Recognition (ASR) decoder on a mobile device (PDA). The first pass, which recognizes base syllables, is implemented with discrete Hidden Markov Models (HMMs) and a time-synchronous, tree-lexicon Viterbi search. The second pass, which combines the language model, the pronunciation lexicon, and the N-best syllable hypotheses from the first pass, is implemented with Weighted Finite State Transducers (WFSTs). The best word sequence is obtained by running shortest-path algorithms over the composition result. The system restricts its application to the travel domain and decouples the application of the acoustic model from the application of the language model into independent recognition passes. We report real-time recognition performance on an ASUS P565 with an 800 MHz processor and 128 MB RAM running the Microsoft Windows Mobile 6 operating system. The 26-hour TCC-300 speech corpus is used to train 151 acoustic models. Three minutes of speech recorded by reading travel-domain transcriptions are used as the test set for evaluating performance (syllable and character accuracies) and real-time factors on the PC and on the PDA. A bigram model with a 3,500-word vocabulary trained on the BTEC corpus is used in the second pass. In the first pass, the best syllable accuracy is 38.8% given 30-best syllable hypotheses, continuous HMMs, and 26-dimensional features. With these syllable hypotheses and acoustic models, we obtain 27.6% character accuracy on the PC after the second pass.
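The core of the first pass, time-synchronous Viterbi decoding over a discrete HMM, can be sketched compactly. The two-state, three-symbol toy model below is an assumption for illustration; it stands in for neither the 151 acoustic models and tree-lexicon search nor the WFST second pass used in the thesis.

```python
# A small sketch of time-synchronous Viterbi decoding over a discrete HMM.
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely state sequence for `obs` (log-domain)."""
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]]) for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            best_prev = max(states, key=lambda p: V[t - 1][p] + math.log(trans_p[p][s]))
            V[t][s] = (V[t - 1][best_prev] + math.log(trans_p[best_prev][s])
                       + math.log(emit_p[s][obs[t]]))
            back[t][s] = best_prev
    # Trace back from the best final state.
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path))

states = ["sil", "speech"]
start_p = {"sil": 0.8, "speech": 0.2}
trans_p = {"sil": {"sil": 0.7, "speech": 0.3}, "speech": {"sil": 0.2, "speech": 0.8}}
emit_p = {"sil": {"lo": 0.7, "mid": 0.2, "hi": 0.1},
          "speech": {"lo": 0.1, "mid": 0.3, "hi": 0.6}}
print(viterbi(["lo", "hi", "hi", "mid", "lo"], states, start_p, trans_p, emit_p))
# ['sil', 'speech', 'speech', 'speech', 'sil']
```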
9. Improving fault coverage and minimising the cost of fault identification when testing from finite state machines

Guo, Qiang January 2006 (has links)
Software needs to be adequately tested in order to increase confidence that the system being developed is reliable. However, testing is a complicated and expensive process. Formal specification-based models such as finite state machines have been widely used in system modelling and testing. In this PhD thesis, we primarily investigate fault detection and identification when testing from finite state machines. The research in this thesis comprises three main topics: the construction of multiple Unique Input/Output (UIO) sequences using Metaheuristic Optimisation Techniques (MOTs), improved fault coverage through robust Unique Input/Output Circuit (UIOC) sequences, and fault diagnosis when testing from finite state machines. In the study of the construction of UIOs, a model is proposed in which a fitness function is defined to guide the search for input sequences that are potentially UIOs. In the study of improved fault coverage, a new type of UIOC is defined. Based upon the Rural Chinese Postman Algorithm (RCPA), a new approach is proposed for the construction of more robust test sequences. In the study of fault diagnosis, heuristics are defined that attempt to cause failures to be observed in shorter test sequences, which helps to reduce the cost of fault isolation and identification. The proposed approaches and techniques were evaluated against a set of case studies, which provides experimental evidence for their efficacy.
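A Unique Input/Output (UIO) sequence for a state is an input sequence whose output distinguishes that state from every other state of the machine. On a toy Mealy machine such sequences can be found by exhaustive search, as sketched below; the thesis replaces exhaustive search with metaheuristic optimisation guided by a fitness function. The example machine and the length bound are assumptions for illustration.

```python
# Brute-force construction of UIO sequences for a small Mealy machine.
from itertools import product

# Toy Mealy machine: (state, input) -> (output, next state).
DELTA = {
    ("s1", "a"): ("0", "s2"), ("s1", "b"): ("1", "s1"),
    ("s2", "a"): ("0", "s3"), ("s2", "b"): ("0", "s1"),
    ("s3", "a"): ("1", "s1"), ("s3", "b"): ("0", "s2"),
}
STATES = ["s1", "s2", "s3"]
INPUTS = ["a", "b"]

def output_seq(state, inputs):
    """Output string produced when `inputs` is applied starting in `state`."""
    outs = []
    for i in inputs:
        o, state = DELTA[(state, i)]
        outs.append(o)
    return "".join(outs)

def find_uio(state, max_len=4):
    """Shortest input sequence (up to max_len) that distinguishes `state`."""
    for length in range(1, max_len + 1):
        for seq in product(INPUTS, repeat=length):
            ref = output_seq(state, seq)
            if all(output_seq(other, seq) != ref
                   for other in STATES if other != state):
                return "".join(seq)
    return None

for s in STATES:
    print(s, find_uio(s))   # expected: s1 b / s2 aa / s3 a
```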
10. Finite state automaton construction through regular expression hashing

Coetser, Rayner Johannes Lodewikus 25 August 2010 (has links)
In this study, the regular expressions forming abstract states in Brzozowski's algorithm are not remapped to sequential state transition table addresses as in the classical approach, but are hashed to integers. Two regular expressions that hash to the same code are assigned the same integer address in the state transition table, reducing the number of states in the automaton. This reduction does not necessarily lead to a minimal automaton: no restrictions are placed on the hash function hashing two regular expressions to the same code. Depending on the quality of the hash function, either a super-automaton (previously referred to as an approximate automaton) or an exact automaton is constructed. When two regular expressions are hashed to the same state and they do not represent the same regular language, a super-automaton is constructed. A super-automaton accepts the regular language of the input regular expression, in addition to some extra strings. If the hash function is bad, many regular expressions that do not represent the same regular language will be hashed together, resulting in a smaller automaton that accepts extra strings. In the ideal case, two regular expressions are hashed together only when they represent the same regular language, and an exact minimal automaton is constructed. It is shown that the hashing approach always constructs either an exact automaton or a super-automaton. Another outcome of the hashing approach is that a non-deterministic automaton may be constructed; a new version of the hashing variant of Brzozowski's algorithm is therefore put forward which constructs a deterministic automaton. A method is also put forward for measuring the difference between an exact automaton and a super-automaton: the k-equivalence measure, which gives the number of characters up to which the strings of two regular expressions are equal. The better the hash function, the higher the value of k, up to the point where the hash function hashes regular expressions together if and only if they have the same regular language. Using the k-equivalence measure, eight generated hash functions and one hand-coded hash function are evaluated on a large number of short regular expressions, which are generated using Gödel numbers. The k-equivalence concept is extended to the average k-equivalence value in order to evaluate the hash functions on longer regular expressions. The hand-coded hash function is found to produce good results. Copyright / Dissertation (MEng)--University of Pretoria, 2009. / Computer Science / unrestricted
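The construction described above can be sketched with Brzozowski derivatives and a pluggable hash function: every derivative is mapped to a state address by the hash, and derivatives that collide share a state. The regex encoding, the simplification rules, and the handling of collisions below are simplifying assumptions rather than the dissertation's implementation; the second call deliberately uses a degenerate hash to show how collisions produce a super-automaton.

```python
# Brzozowski derivatives with hashed states (sketch).
EMPTY, EPS = ("empty",), ("eps",)

def cat(r, s):
    if r == EMPTY or s == EMPTY: return EMPTY
    if r == EPS: return s
    if s == EPS: return r
    return ("cat", r, s)

def alt(r, s):
    if r == EMPTY: return s
    if s == EMPTY: return r
    if r == s: return r
    return ("alt", r, s)

def nullable(r):
    kind = r[0]
    if kind in ("eps", "star"): return True
    if kind in ("empty", "sym"): return False
    if kind == "cat": return nullable(r[1]) and nullable(r[2])
    return nullable(r[1]) or nullable(r[2])          # alt

def deriv(r, a):
    kind = r[0]
    if kind in ("empty", "eps"): return EMPTY
    if kind == "sym": return EPS if r[1] == a else EMPTY
    if kind == "cat":
        d = cat(deriv(r[1], a), r[2])
        return alt(d, deriv(r[2], a)) if nullable(r[1]) else d
    if kind == "alt": return alt(deriv(r[1], a), deriv(r[2], a))
    return cat(deriv(r[1], a), r)                    # star

def build(regex, alphabet, h):
    """Build a transition table whose state addresses are hash codes h(regex);
    colliding derivatives share a state (the first one hashed there wins)."""
    table, accepting, first = {}, set(), {}
    work = [regex]
    while work:
        r = work.pop()
        code = h(r)
        if code in first:          # collision or revisit: reuse the state
            continue
        first[code] = r
        if nullable(r):
            accepting.add(code)
        table[code] = {}
        for a in alphabet:
            d = deriv(r, a)
            table[code][a] = h(d)
            work.append(d)
    return table, accepting, h(regex)

# (ab)*: with a perfect "hash" (the regex itself) we get the exact automaton.
AB_STAR = ("star", ("cat", ("sym", "a"), ("sym", "b")))
table, acc, start = build(AB_STAR, "ab", h=lambda r: r)
print(len(table), "states, start accepting:", start in acc)   # 3 states, True
# A degenerate hash collapses every derivative into one state, yielding a
# super-automaton that accepts every string over {a, b}.
table, acc, start = build(AB_STAR, "ab", h=lambda r: 0)
print(len(table), "state(s)")                                  # 1 state
```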
