31
An Approach to Diagnosability Analysis for Interacting Finite State Systems. Lawesson, Dan. January 2005 (has links)
Fault isolation is the process of reasoning required to find the cause of a system failure. In a model-based approach, the available information is a model of the system and some observations. Using knowledge of how the system generally behaves, as given in the system model, together with partial observations of the events of the current situation, the task is to deduce the failure-causing event(s). In our setting, the observable events manifest themselves in a message log. We study post-mortem fault isolation for moderately concurrent discrete event systems where the temporal order of logged messages contains little information. To carry out fault isolation, one has to study the correlation between observed events and fault events of the system. In general, such a study calls for exploration of the state space of the system, which is exponential in the number of system components. Since we are studying a restricted class of all possible systems, we may apply aggressive specialized abstraction policies in order to allow fault isolation without ever considering the often intractably large state space of the system. In this thesis we describe a mathematical framework as well as a prototype implementation and an experimental evaluation of such abstraction techniques. The method is efficient enough to allow not only post-mortem fault isolation but also design-time diagnosability analysis of the system, which can be seen as a non-trivial way of analyzing all possible observations of the system versus the corresponding fault isolation outcome. This work has been supported by VINNOVA's Competence Center ISIS.
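To make the fault-isolation setting concrete, here is a minimal consistency-based sketch in Python. The system model, fault names, and messages are invented for illustration; the thesis's abstraction techniques for avoiding state-space exploration are not reproduced here.

```python
# A toy consistency-based fault isolation sketch (hypothetical model):
# each fault is associated with the set of messages its failure
# propagation can produce; a fault is a candidate explanation if it
# accounts for every message in the observed log.

SYSTEM_MODEL = {
    "pump_failure":   {"low_pressure", "valve_alarm"},
    "sensor_failure": {"valve_alarm"},
    "leak":           {"low_pressure"},
}

def isolate(observed_log):
    """Return faults whose predicted messages cover the observed log."""
    observed = set(observed_log)  # temporal order carries little information
    return [fault for fault, messages in SYSTEM_MODEL.items()
            if observed <= messages]

print(isolate(["valve_alarm"]))                  # ['pump_failure', 'sensor_failure']
print(isolate(["low_pressure", "valve_alarm"]))  # ['pump_failure']
```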
32
Efficient Methods for Automatic Speech Recognition. Seward, Alexander. January 2003 (has links)
This thesis presents work in the area of automatic speech recognition (ASR). The thesis focuses on methods for increasing the efficiency of speech recognition systems and on techniques for efficient representation of different types of knowledge in the decoding process. In this work, several decoding algorithms and recognition systems have been developed, aimed at various recognition tasks. The thesis presents the KTH large vocabulary speech recognition system. The system was developed for online (live) recognition with large vocabularies and complex language models. The system utilizes weighted transducer theory for efficient representation of different knowledge sources, with the purpose of optimizing the recognition process. A search algorithm for efficient processing of hidden Markov models (HMMs) is presented. The algorithm is an alternative to the classical Viterbi algorithm for fast computation of shortest paths in HMMs. It is part of a larger decoding strategy aimed at reducing the overall computational complexity in ASR. In this approach, all HMM computations are completely decoupled from the rest of the decoding process. This enables the use of larger vocabularies and more complex language models without an increase of HMM-related computations. Ace is another speech recognition system developed within this work. It is a platform aimed at facilitating the development of speech recognizers and new decoding methods. A real-time system for low-latency online speech transcription is also presented. The system was developed within a project with the goal of improving the possibilities for hard-of-hearing people to use conventional telephony by providing speech-synchronized multimodal feedback. This work addresses several additional requirements implied by this special recognition task. / QC 20100811
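As a point of reference for the HMM decoding discussed above, the following is a textbook log-space Viterbi sketch, the classical baseline that the thesis's alternative shortest-path algorithm is contrasted with. The toy two-state model below is invented.

```python
import math

def viterbi(observations, states, log_start, log_trans, log_emit):
    """Classical Viterbi: most likely state path through an HMM."""
    V = [{s: log_start[s] + log_emit[s][observations[0]] for s in states}]
    back = [{}]
    for t in range(1, len(observations)):
        V.append({}); back.append({})
        for s in states:
            # Best predecessor for state s at time t.
            prev = max(states, key=lambda p: V[t-1][p] + log_trans[p][s])
            V[t][s] = V[t-1][prev] + log_trans[prev][s] + log_emit[s][observations[t]]
            back[t][s] = prev
    # Trace back the best path from the final column.
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for t in range(len(observations) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path))

states = ["sil", "speech"]
lp = math.log
start = {"sil": lp(0.8), "speech": lp(0.2)}
trans = {"sil": {"sil": lp(0.7), "speech": lp(0.3)},
         "speech": {"sil": lp(0.2), "speech": lp(0.8)}}
emit = {"sil": {"quiet": lp(0.9), "loud": lp(0.1)},
        "speech": {"quiet": lp(0.3), "loud": lp(0.7)}}
print(viterbi(["quiet", "loud", "loud"], states, start, trans, emit))
# -> ['sil', 'speech', 'speech']
```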
33
Multilevel Gain Cell Arrays for Fault-Tolerant VLSI Systems. Khalid, Muhammad Umer. January 2011 (has links)
Embedded memories dominate the area, power and cost of modern very-large-scale integration systems-on-chip (VLSI SoCs). Furthermore, due to process variations, it becomes challenging to design reliable, energy-efficient systems. Fault-tolerant designs can therefore be area-efficient, cost-effective and low-power. The idea of this project is to design embedded memories where reliability is intentionally compromised to increase storage density. Gain cell memories are smaller than SRAM and, unlike DRAM, they are logic compatible. In multilevel DRAM, storage density is increased by storing two bits per cell without reducing the feature size. This thesis targets multilevel read and write schemes that provide short access times, small area overhead and high reliability. First, a timing analysis of the reference design is performed for the read and write operations. An analytical model of the write bit line (WBL) is developed to estimate the write delay. A replica technique is designed to generate the delay and track the variations of the storage array; it is realized by designing a replica column together with read and write control circuits. A memory controller is designed to control the read and write operations in the multilevel DRAM. A multilevel DRAM with a storage capacity of eight kilobits is designed in UMC 90 nm technology. Simulations are performed for testing, and results are reported for energy and access time. A Monte Carlo analysis is done to assess the variation tolerance of the replica technique. Finally, the multilevel DRAM with the replica technique is compared with the reference design to check the improvement in access times.
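The two-bits-per-cell idea can be illustrated with a small sketch; the voltage levels and read thresholds below are arbitrary placeholders, not values from the UMC 90 nm design.

```python
# Hypothetical illustration of two-bits-per-cell multilevel storage:
# a write maps a 2-bit symbol to one of four storage voltages, and a
# read recovers the symbol by comparing against reference thresholds.

LEVELS = {0b00: 0.0, 0b01: 0.4, 0b10: 0.8, 0b11: 1.2}   # volts (invented)
THRESHOLDS = [0.2, 0.6, 1.0]                             # read references

def write_cell(bits):
    return LEVELS[bits]

def read_cell(voltage):
    # The recovered symbol is the number of references the voltage exceeds.
    return sum(voltage > t for t in THRESHOLDS)

for bits in LEVELS:
    assert read_cell(write_cell(bits)) == bits
```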
34
A Novel Method For Watermarking Sequential Circuits. Lewandowski, Matthew. 01 January 2013 (has links)
We present an Intellectual Property (IP) protection technique for sequential circuits, driven by embedding a decomposed signature into a Finite State Machine (FSM) through manipulation of the arbitrary state encoding of the unprotected FSM. The technique is composed of three steps: (a) transforming the signature into a watermark graph, (b) embedding the watermark graph into the original FSM's State Transition Graph (STG), and (c) generating models for verification and extraction. In the watermark construction process, watermark graphs are generated from signatures. The proposed methods for watermark construction are: (1) BSD, (2) FSD, and (3) HSD. The HSD method is shown to be advantageous for all signatures while providing sparse watermark FSMs with complexity O(n^2). The embedding process is related to the sub-graph matching problem. Due to the computational complexity of the matching problem, attempts to reverse engineer or remove the constructed watermark from the protected FSM with only finite resources and time are shown to be infeasible. The proposed embedding solutions are: (1) brute force and (2) a greedy heuristic. The greedy heuristic has a computational complexity of O(n log n), where n is the number of states in the watermark graph, and showed improvements for three of the six encoding schemes used in the experimental results. Model generation and verification utilize design automation techniques to generate multiple representations of the original, watermark, and watermarked FSMs. Analysis of the security provided by this method shows that a variety of attacks on the watermark and system, including (1) data-mining hidden functionality, (2) preimage, (3) second-preimage, and (4) collision attacks, are computationally infeasible. Experimental results for the ten largest IWLS 93 benchmarks demonstrate that the proposed watermarking technique is a secure yet flexible technique for protecting sequential-circuit-based IP cores.
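As a rough illustration of step (a), the sketch below hashes a signature into a small transition graph. It is a hypothetical stand-in, not the BSD, FSD, or HSD constructions from the thesis; the state count and bit-extraction scheme are arbitrary.

```python
import hashlib

# Hypothetical signature-to-watermark-graph sketch: hash the signature,
# then read the digest bytes as a walk that adds transitions.

def signature_to_watermark(signature, n_states=8):
    digest = hashlib.sha256(signature.encode()).digest()
    edges = set()
    current = 0
    for byte in digest:
        nxt = byte % n_states
        symbol = (byte >> 4) & 1           # 1-bit input alphabet
        edges.add((current, symbol, nxt))  # (state, input, next state)
        current = nxt
    return edges

wm = signature_to_watermark("author: M. Lewandowski, core v1.0")
print(len(wm), "watermark transitions")
```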
35
Approximate Sub-Graph Isomorphism For Watermarking Finite State Machine Hardware. Meana, Richard William Piper. 01 January 2013 (links)
We present a method of mitigating theft of sequential circuit Intellectual Property hardware designs through watermarking. Hardware watermarking can be performed by selectively embedding a watermark in the state encoding of the Finite State Machine. This form of watermarking can be achieved by matching a directed graph representation of the watermark with a sub-graph in the state transition graph representation of the FSM. We experiment with three approaches: a brute force method that provides a proof of concept, a greedy algorithm that provides excellent runtime at the cost of sub-optimal results, and finally a simulated annealing method that provides near-optimal solutions with runtimes that meet our performance goals. The simulated annealing approach, when applied to ten benchmarks chosen from the IWLS 93 benchmark suite, provides watermarking results with an edge overhead of less than 6% on average and runtimes not exceeding five minutes.
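A simplified simulated-annealing sketch for the approximate matching step is shown below; the cost function, cooling schedule, and parameters are illustrative guesses, not the thesis's tuned configuration.

```python
import math, random

# Toy simulated annealing for approximate sub-graph matching: the state
# is a mapping from watermark nodes to STG states, and the cost is the
# number of watermark edges with no image among the STG's edges.

def anneal(wm_nodes, wm_edges, stg_nodes, stg_edges,
           t0=1.0, cooling=0.995, steps=20000, seed=0):
    rng = random.Random(seed)
    cur = {v: rng.choice(stg_nodes) for v in wm_nodes}

    def cost(a):
        return sum((a[u], a[v]) not in stg_edges for u, v in wm_edges)

    cur_cost = cost(cur)
    best, best_cost, t = dict(cur), cur_cost, t0
    for _ in range(steps):
        cand = dict(cur)
        cand[rng.choice(wm_nodes)] = rng.choice(stg_nodes)
        delta = cost(cand) - cur_cost
        # Accept improvements always, worsenings with Boltzmann probability.
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            cur, cur_cost = cand, cur_cost + delta
            if cur_cost < best_cost:
                best, best_cost = dict(cur), cur_cost
        t *= cooling
        if best_cost == 0:
            break
    return best, best_cost

stg_edges = {(0, 1), (1, 2), (2, 0), (2, 3), (3, 0)}
mapping, unmatched = anneal(["a", "b", "c"], [("a", "b"), ("b", "c")],
                            [0, 1, 2, 3], stg_edges)
print(mapping, "unmatched edges:", unmatched)
```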
36
Fault tolerance in distributed systems: a coding-theoretic approach. Balasubramanian, Bharath. 19 November 2012
Distributed systems are rapidly increasing in importance due to the need for scalable computations on huge volumes of data. This fact is reflected in many real-world distributed applications such as Amazon's EC2 cloud computing service, Facebook's Cassandra key-value store or Apache's Hadoop MapReduce framework. Multi-core architectures developed by companies such as Intel and AMD have further brought this to prominence, since workloads can now be distributed across many individual cores. The nodes or entities in such systems are often built using commodity hardware and are prone to physical failures and security vulnerabilities. Achieving fault tolerance in such systems is a challenging task, since it is not easy to observe and control these distributed entities. Replication is a standard approach for fault tolerance in distributed systems. The main advantage of this approach is that the backups incur very little overhead in terms of the time taken for normal operation or recovery. However, replication is grossly wasteful in terms of the number of backups required for fault tolerance. The large number of backups has two major implications. First, the total space or memory required for fault tolerance is considerably high. Second, there is a significant cost of resources such as the power required to run the backup processes. Given the large number of distributed servers employed in real-world applications, it is a hard task to provide fault tolerance while achieving both space and operational efficiency. In the world of data fault tolerance and communication, coding theory is used as the space-efficient alternative to replication. A direct application of coding theory to distributed servers, treating the servers as blocks of data, is very inefficient in terms of the updates to the backups. This is primarily because each update to the server will affect many blocks in memory, all of which have to be re-encoded at the backups. This leads us to the following thesis statement: can we design a mechanism for fault tolerance in distributed systems that combines the space efficiency of coding theory with the low operational overhead of replication? We present a new paradigm to solve this problem, broadly referred to as fusion. We provide fusion-based solutions for two models of computation that are representative of a large class of applications: (i) systems modeled as deterministic finite state machines and (ii) systems modeled as programs containing data structures. For finite state machines, we use the notion of Hamming distances to present a polynomial time algorithm to generate efficient backup state machines. For programs hosting data structures, we use a combination of erasure codes and selective replication to generate efficient backups for most commonly used data structures such as queues, array lists, linked lists, vectors and maps. We present theoretical and experimental results that demonstrate the efficiency of our schemes over replication. Finally, we use our schemes to design an efficient solution for fault tolerance in two real-world applications: Amazon's Dynamo key-value store and Google's MapReduce framework.
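The space savings of the fusion idea can be seen in a toy example: one XOR-coded backup protecting two primary counters against a single crash, where plain replication would need two backups. This is only a sketch of the principle; the dissertation's constructions handle general data structures and state machines.

```python
# Toy "fused" backup: the coded copy always holds a XOR b, so either
# primary can be rebuilt from the survivor plus the fused value.

class FusedCounters:
    def __init__(self):
        self.a = 0      # primary 1
        self.b = 0      # primary 2
        self.fused = 0  # coded backup: a XOR b

    def add(self, which, value):
        # Remove the old value from the code, update, add the new value.
        if which == "a":
            self.fused ^= self.a; self.a += value; self.fused ^= self.a
        else:
            self.fused ^= self.b; self.b += value; self.fused ^= self.b

    def recover(self, lost):
        # Rebuild the lost primary from the survivor and the fused copy.
        return self.fused ^ (self.b if lost == "a" else self.a)

s = FusedCounters()
s.add("a", 5); s.add("b", 9); s.add("a", 2)
assert s.recover("a") == 7 and s.recover("b") == 9
```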
37
Towards a Brain-inspired Information Processing System: Modelling and Analysis of Synaptic Dynamics. El-Laithy, Karim. 12 January 2012 (has links) (PDF)
Biological neural systems (BNS) in general, and the central nervous system (CNS) specifically, exhibit strikingly efficient computational power along with an extremely flexible and adaptive basis for acquiring and integrating new knowledge. Gaining more insight into the actual mechanisms of information processing within the BNS and their computational capabilities is a core objective of modern computer science, computational sciences and neuroscience. Among the main reasons for this drive to understand the brain is to help improve the quality of life of people suffering from partial or complete loss of brain or spinal cord functions. Brain-computer interfaces (BCI), neural prostheses and other similar approaches are potential solutions, either to help these patients through therapy or to push the progress in rehabilitation. There is, however, a significant lack of knowledge regarding the basic information processing within the CNS. Without a better understanding of the fundamental operations or sequences leading to cognitive abilities, applications like BCI or neural prostheses will keep struggling to find a proper and systematic way to help patients in this regard. In order to gain more insight into these basic information processing methods, this thesis presents an approach that makes a formal distinction between the essence of being intelligent (as for the brain) and the classical class of artificial intelligence, e.g. expert systems. This approach investigates the underlying mechanisms that allow the CNS to perform a massive amount of computational tasks with sustainable efficiency and flexibility. This is the essence of being intelligent, i.e. being able to learn, adapt and invent.

The approach used in the thesis at hand is based on the hypothesis that the brain, or specifically a biological neural circuitry in the CNS, is a dynamic system (network) that features emergent capabilities. These capabilities can be imported into spiking neural networks (SNN) by emulating the dynamic neural system. Emulating the dynamic system requires simulating both the inner workings of the system and the framework for performing the information processing tasks. Thus, this work comprises two main parts. The first part is concerned with introducing a proper and novel dynamic synaptic model as a vital constituent of the inner workings of the dynamic neural system. This model represents a balanced integration between the needed biophysical detail and computational economy. Being a biophysical model is important so that the abilities of the target dynamic system can be inherited, and being simple is needed to allow for use in large-scale simulations and for hardware implementation in the future. In addition, the energy-related aspects of synaptic dynamics are studied and linked to the behaviour of networks seeking stable states of activity. The second part of the thesis is consequently concerned with importing the processing framework of the dynamic system into the environment of SNN. This part of the study investigates the well-established concept of binding by synchrony to solve the information binding problem, and proposes the concept of synchrony states within SNN. The concept of computing with states is extended to investigate a computational model based on finite-state machines and reservoir computing. Biologically plausible validations of the introduced model and frameworks are performed. The results and discussions of these validations indicate that this study presents a significant advance in knowledge about the mechanisms underpinning the computational power of the CNS. Furthermore, it shows a roadmap for adopting biological computational capabilities in computational science in general and in biologically inspired spiking neural networks in particular. Large-scale simulations and the development of neuromorphic hardware are work in progress and future work. Among the applications of the introduced work are neural prostheses and bionic automation systems.
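For readers unfamiliar with dynamic synapses, the sketch below implements a standard Tsodyks-Markram-style short-term plasticity model (facilitation and depression). It is a generic illustration of activity-dependent synaptic efficacy, not the novel model introduced in the thesis, and all parameter values are arbitrary.

```python
import math

# Generic short-term synaptic dynamics: facilitation variable u decays
# toward U, available resources x recover toward 1 between spikes; each
# spike facilitates u and depletes x, yielding a history-dependent
# response amplitude per spike.

def synapse_response(spike_times, U=0.2, tau_f=0.6, tau_d=0.3, weight=1.0):
    """Return the efficacy (amplitude) of each spike in the train."""
    u, x, last_t, amps = U, 1.0, None, []
    for t in spike_times:
        if last_t is not None:
            dt = t - last_t
            u = U + (u - U) * math.exp(-dt / tau_f)      # facilitation decays to U
            x = 1.0 + (x - 1.0) * math.exp(-dt / tau_d)  # resources recover to 1
        u = u + U * (1.0 - u)       # spike-triggered facilitation
        amps.append(weight * u * x)
        x = x * (1.0 - u)           # deplete the released resources
        last_t = t
    return amps

print(synapse_response([0.0, 0.05, 0.10, 0.15]))  # a regular 20 Hz train
```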
38
A Framework for Estimating Energy Consumed by Electric Loads Through Minimally Intrusive Approaches. Giri, Suman. 01 April 2015 (has links)
This dissertation explores the problem of energy estimation in supervised Non-Intrusive Load Monitoring (NILM). NILM refers to a set of techniques used to estimate the electricity consumed by individual loads in a building from measurements of the total electrical consumption. Most commonly, NILM works by first attributing any significant change in the total power consumption (also known as an event) to a specific load and subsequently using these attributions (i.e., the labels for the events) to estimate the energy for each load. For this last step, most proposed solutions in the field impose simplifying assumptions to make the problem more tractable, which has severely limited their practicality. To address this knowledge gap, we present a framework for creating appliance models based on classification labels and aggregate power measurements that can help relax many of these assumptions. Within the framework, we model the problem of utilizing a sequence of event labels to generate energy estimates as a broader class of problems with two major components: (i) with the understanding that the labels arise from a process with distinct states and state transitions, we estimate the underlying Finite State Machine (FSM) model that most likely generated the observed sequence; (ii) we allow for the observed sequence to have errors, and present an error correction algorithm to detect and correct them. We test the framework on data from 43 appliances collected from 19 houses and find that, compared to the case with no correction, it improves the errors in energy estimates for 19 appliances by a factor of 50, leaves 17 appliances unchanged, and negatively impacts 6 appliances by a factor of 1.4. This approach of utilizing event sequences to estimate energy has implications for virtual metering of appliances as well. In a case study, we utilize this framework to replace plug-level sensors with cheap, easily deployable contactless sensors, and find that for the 6 appliances virtually metered using magnetic field sensors, the inferred energy values have an average error of 10.9%.
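The last step, turning a labeled event sequence into per-appliance energy, can be sketched as follows. The event format and the toy log are assumptions for illustration; the dissertation's FSM estimation and error-correction machinery are not shown.

```python
# Toy NILM energy accounting: each event is (time_s, appliance,
# delta_watts), the step change in aggregate power attributed to that
# appliance. Integrating the running power between events gives energy.

def estimate_energy(events, horizon):
    """Return kWh per appliance up to `horizon` seconds."""
    running = {}   # appliance -> (power_watts, since_time)
    energy = {}    # appliance -> joules
    for t, app, delta in sorted(events):
        p, since = running.get(app, (0.0, t))
        energy[app] = energy.get(app, 0.0) + p * (t - since)
        running[app] = (max(p + delta, 0.0), t)   # clamp at zero power
    for app, (p, since) in running.items():
        energy[app] = energy.get(app, 0.0) + p * (horizon - since)
    return {app: j / 3.6e6 for app, j in energy.items()}   # J -> kWh

log = [(0, "fridge", 120.0), (600, "kettle", 2000.0),
       (720, "kettle", -2000.0), (1800, "fridge", -120.0)]
print(estimate_energy(log, horizon=3600))
```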
39
Menings- och dokumentklassficering för identifiering av meningar / Sentence and document classification for identification of sentences. Paulson, Jörgen; Huynh, Peter. January 2018 (has links)
This thesis project investigates how well sentence classification and document classification techniques work for selecting sentences that contain the variables used in experiments described in medical documents. For sentence classification, state machines and keywords are used; for document classification, linear SVM and Random forest. The text features selected are LIX (a readability index) and word count. The text features are taken from an existing dataset created by Abrahamsson (T.B.D) from articles collected for this study. This dataset is then used for document classification. For the document classification techniques, what is examined is the ability to distinguish between documents of the types scientific articles with experiments, scientific articles without experiments, scientific articles with meta-analyses, and documents that are not scientific articles. These documents are processed with sentence classification to examine how well it finds sentences that contain definitions of variables. The results of the experiment indicated that the sentence classification techniques were not suitable for this purpose due to low precision. For document classification, Random forest was best suited but had trouble distinguishing the different types of scientific articles from each other.
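A minimal sketch of the document-classification setup described above, assuming scikit-learn is available; the feature values (LIX and word count) and labels are invented toy data, not the Abrahamsson dataset.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

# Each row: [LIX readability index, word count]; labels: document type.
X = [[55, 4200], [58, 5100], [47, 3900], [61, 6800],
     [33, 800],  [30, 650],  [52, 4500], [35, 900]]
y = ["experiment", "experiment", "no_experiment", "meta_analysis",
     "not_article", "not_article", "meta_analysis", "not_article"]

svm = make_pipeline(StandardScaler(), LinearSVC())  # scale features for the SVM
forest = RandomForestClassifier(random_state=0)
for name, model in [("linear SVM", svm), ("random forest", forest)]:
    model.fit(X, y)
    print(name, model.predict([[50, 4000]]))
```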
40
Uma estratégia para a minimização de máquinas de estados finitos parciais / An approach to incompletely specified finite state machine minimization. Alex Donizeti Betez Alberto. 22 April 2009 (has links)
Finite State Machines, besides their numerous applications, are widely used in Software Engineering to model system specifications. In these models, designers may inadvertently introduce redundant states, i.e., states that exhibit the same input/output behavior. Eliminating such states benefits the activities that use the model, reducing complexity and requiring fewer physical resources in implementations. The process of eliminating redundant states is known as minimization and can be accomplished in polynomial time for completely specified machines. On the other hand, the minimization of partial machines, whose specification does not cover the whole input domain, can be achieved in polynomial time only with non-deterministic approaches; it is a known NP-complete problem. This work presents a deterministic strategy for minimizing incompletely specified finite state machines that uses heuristics and optimizations to make the process more efficient. To measure the performance gain, experiments were carried out in which the running times of an implementation of the proposed method were measured, together with the times of implementations of two other known methods. The results showed significant performance advantages for the new method over the previous ones.
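The polynomial-time case mentioned above (completely specified machines) can be sketched with standard partition refinement; the example machine below is invented, and the thesis's NP-complete partial-machine strategy is not shown.

```python
# Partition-refinement minimization for a *completely* specified Mealy
# machine: start by grouping states with identical outputs, then split
# blocks until successors land in consistent blocks. Merged blocks are
# the equivalent (redundant) states.

def minimize(states, inputs, delta, lam):
    # Initial partition: states with identical output behaviour.
    part = {}
    for s in states:
        part.setdefault(tuple(lam[s, a] for a in inputs), set()).add(s)
    blocks = list(part.values())
    changed = True
    while changed:
        changed = False
        def block_of(s):
            return next(i for i, b in enumerate(blocks) if s in b)
        new_blocks = []
        for b in blocks:
            split = {}
            for s in b:
                sig = tuple(block_of(delta[s, a]) for a in inputs)
                split.setdefault(sig, set()).add(s)
            new_blocks.extend(split.values())
            changed |= len(split) > 1
        blocks = new_blocks
    return blocks

states, inputs = ["s0", "s1", "s2"], ["0", "1"]
delta = {("s0","0"): "s1", ("s0","1"): "s2",
         ("s1","0"): "s0", ("s1","1"): "s1",
         ("s2","0"): "s0", ("s2","1"): "s2"}
lam   = {("s0","0"): "x", ("s0","1"): "y",
         ("s1","0"): "y", ("s1","1"): "x",
         ("s2","0"): "y", ("s2","1"): "x"}
print(minimize(states, inputs, delta, lam))  # s1 and s2 merge
```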