281
FPGA-based Implementation of Concatenative Speech Synthesis Algorithm
Bamini, Praveen Kumar, 29 October 2003 (has links)
The main aim of a text-to-speech synthesis system is to convert ordinary text into an acoustic signal that is indistinguishable from human speech. This thesis presents an architecture for implementing a concatenative speech synthesis algorithm on FPGAs. Many current text-to-speech systems are based on the concatenation of acoustic units of recorded speech, and such synthesizers are capable of producing highly intelligible speech, although quality often suffers from discontinuities between the acoustic units due to contextual differences. Concatenation is the simplest method of producing synthetic speech: prerecorded acoustic elements are joined to form a continuous speech signal. The software implementation of the algorithm is written in C, whereas the hardware implementation is done in structural VHDL. A database of acoustic elements is first built by recording sounds for the different phones. The architecture concatenates the acoustic elements corresponding to the phones that form the target word, i.e., the word to be synthesized. The architecture does not address the discontinuities between the acoustic elements, as its ultimate goal is simply the synthesis of speech. The hardware implementation is verified on a Virtex (v800hq240-4) FPGA device.
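To make the core concatenation step concrete, here is a minimal sketch in Python (illustrative only, and not the thesis's C or VHDL implementation); the phone file names and the example word are assumptions made for the example.

# Minimal sketch of concatenative synthesis: join prerecorded phone
# recordings for a target word into one continuous waveform.
# The file names and the phone sequence below are hypothetical.
import wave

def concatenate_phones(phone_files, out_path):
    """Append the audio frames of each phone recording, in order."""
    frames, params = [], None
    for path in phone_files:
        with wave.open(path, "rb") as w:
            if params is None:
                params = w.getparams()  # take the audio format from the first clip
            frames.append(w.readframes(w.getnframes()))
    with wave.open(out_path, "wb") as out:
        out.setparams(params)
        for chunk in frames:
            out.writeframes(chunk)

# Example: synthesize the target word "cat" from its phones /k/, /ae/, /t/.
concatenate_phones(["k.wav", "ae.wav", "t.wav"], "cat.wav")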
282
Modélisation à base de Composants de Systèmes Temps réel Hétérogènes en BIP
Basu, Ananda, 15 December 2008 (links) (PDF)
We present the BIP component framework for component-based construction of real-time systems from heterogeneous components. BIP stands for Behavior, Interaction and Priority, the three main elements for building components. It considers that systems can be obtained by composition of 3-layered components, where the lowest layer represents behavior as a set of transitions with triggers and actions; the intermediate level is the set of interactions between the transitions of the behavior level; and the upper level is a set of dynamic priority rules. BIP has a rigorous formal semantics, and supports a system construction methodology based on a parameterized composition operator on components. The use of a layered composition operator allows incremental construction. The system construction process can be viewed as a sequence of transformations in a three-dimensional space of Behavior × Interaction × Priority, and provides a basis for the study of property-preserving transformations between subclasses of systems such as untimed/timed and asynchronous/synchronous. We also provide a distributed semantics for systems in BIP, using a partial state model, and show the use of an auxiliary predicate called Oracle to preserve observational equivalence with respect to the centralized semantics. We present an implementation of BIP as a tool chain that allows modeling, static analysis and execution of BIP descriptions on a dedicated engine. Three different implementations of the engine have been provided: a centralized enumerative engine, a centralized symbolic engine and a distributed engine. We illustrate the use of the tool chain for two major application domains: software componentization and the modeling of mixed hardware/software systems.
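To illustrate the three layers informally, the following Python sketch (an illustration of the layering only, not the BIP language or its engine API) treats behavior as per-component transitions on ports, an interaction as a set of component/port pairs that must synchronize, and priority as a rule that selects among the enabled interactions; the producer/consumer components are hypothetical.

# Illustrative three-layer composition: behavior (transitions), interaction
# (synchronized port sets), priority (choice among enabled interactions).
class Component:
    def __init__(self, name, transitions, state):
        self.name = name
        self.state = state
        # transitions: port -> (state in which the port is enabled, next state)
        self.transitions = transitions

    def enabled(self, port):
        t = self.transitions.get(port)
        return t is not None and t[0] == self.state

    def fire(self, port):
        self.state = self.transitions[port][1]

def enabled_interactions(interactions):
    """An interaction (a list of (component, port) pairs) is enabled when
    every participating component enables its port."""
    return [i for i in interactions if all(c.enabled(p) for c, p in i)]

def step(interactions, priority):
    """One engine step: collect enabled interactions, apply the priority
    rule to pick one, then fire it atomically."""
    ready = enabled_interactions(interactions)
    if not ready:
        return False
    for comp, port in priority(ready):
        comp.fire(port)
    return True

# Hypothetical producer/consumer system.
prod = Component("producer", {"put": ("full", "empty"), "work": ("empty", "full")}, "empty")
cons = Component("consumer", {"get": ("idle", "busy"), "done": ("busy", "idle")}, "idle")
interactions = [[(prod, "put"), (cons, "get")],   # synchronized hand-over
                [(prod, "work")], [(cons, "done")]]
priority = lambda ready: max(ready, key=len)      # prefer larger interactions
for _ in range(6):                                # run a few bounded steps
    if not step(interactions, priority):
        break
    print(prod.state, cons.state)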
283
Utilisation des langages d'arbres pour la modélisation et la vérification des systèmes à états infinis
Pillot, Pierre, 04 December 2007 (links) (PDF)
This document presents various tools for representing and manipulating infinite sets of tuples of trees, called tree-tuple languages. We choose logic programming as the formalism for describing tree-tuple languages (i.e., relations), and program-transformation techniques for computing operations on them. We first study a class of relations closed under most set-theoretic operations, the class of pseudo-regular relations. Thanks to a link between logic programs and rewrite systems, we define classes of conditional rewrite systems whose transitive closure is a pseudo-regular relation. We apply this result to give a decidable class of first-order formulas based on the joinability predicate for R, where R is a pseudo-regular conditional rewrite system. We then extend this result to the second order, second-order variables being interpreted as relations. Starting from an original general algorithm that decides the satisfiability of second-order formulas under certain conditions, we represent an instantiation of the second-order variables as a conditional rewrite system. We show that this work can enable automatic program synthesis. Finally, we use over-approximations for inconsistency tests; to this end, we perform the over-approximation with a class of non-regular logic programs whose emptiness test is decidable. We finally apply this method to the verification of cryptographic protocols.
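For reference, the joinability predicate mentioned above is standardly defined as follows (the notation is the usual one from rewriting, not necessarily the thesis's):

% Two terms are joinable modulo a rewrite system R when they rewrite,
% in zero or more steps, to a common term.
s \downarrow_R t \;\iff\; \exists u .\; s \rightarrow_R^{*} u \;\wedge\; t \rightarrow_R^{*} u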
284
A System for Driver Identity Verification
Hagemann, Andreas; Björk, Hanna, January 2005 (links)
Security issues are a top subject around the world, especially as terror threats seem to intensify. At the same time, the transport industry suffers from smuggling and theft of valuable goods. One way to increase security might be to install a verification system in commercial trucks, in order to ensure that the driver is the proper one.

This thesis has two purposes. One is to find appropriate methods for driver verification and to build a prototype of a verification system which can be used for testing and further development. The other is to study how truck drivers perceive such a system and how their perception aligns with the growing demand for higher security. The present work is the result of a cooperation between an engineer and a cognitive scientist. The thesis focuses on the transport industry and was performed for Volvo Technology Corporation (VTEC), Gothenburg, Sweden.

Eleven available verification methods were studied. To enable a well-founded selection of methods to implement in the prototype, inquiries and interviews with truck drivers and haulage contractors were carried out to complement the theoretical study.

One regular and three biometric verification methods were chosen for the test: fingerprint verification, face recognition, voice recognition and PIN verification. These methods were combined into a prototype system that was implemented in a truck simulator. A graphical user interface was developed in order to make the system user friendly. The prototype system was tested by 18 truck drivers, who were thoroughly interviewed before and after the test in order to capture their background, expectations and opinions as well as their perceptions and experiences of the test.

Most of the test participants were positive about the prototype system. Even though they did not feel a need for it today, they believed it to “be the future”. However, some participants felt uncomfortable with the system since they felt controlled by it. It became clear how important it is to have a system that respects the users’ privacy and to ensure that the users are well informed about how the system is used. Some of the technology used for the verification system requires further development to fit the automotive context, but it is considered possible to achieve a secure and robust system.
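As a purely illustrative sketch of how such a multi-method verification flow could be wired together in software (the method names, matcher threshold and acceptance policy are assumptions, not the prototype described above):

# Illustrative driver-verification flow: try the methods supplied by the
# driver in turn and grant access if any one of them succeeds.
# Method names, the threshold and the example session are hypothetical.

def verify_pin(entered, enrolled):
    return entered == enrolled

def verify_biometric(match_score, threshold=0.8):
    # match_score would come from a fingerprint/face/voice matcher (not shown).
    return match_score >= threshold

def verify_driver(attempts, enrolled_pin):
    """attempts: list of (method, payload) pairs collected from the driver."""
    for method, payload in attempts:
        if method == "pin" and verify_pin(payload, enrolled_pin):
            return True, method
        if method in ("fingerprint", "face", "voice") and verify_biometric(payload):
            return True, method
    return False, None

# Hypothetical session: a weak fingerprint match followed by a correct PIN.
granted, used = verify_driver([("fingerprint", 0.65), ("pin", "4711")], "4711")
print("access granted via " + used if granted else "access denied")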
285
Towards Formal Verification in a Component-based Reuse Methodology
Karlsson, Daniel, January 2003 (links)
Embedded systems are becoming increasingly common in our everyday lives. As technology progresses, these systems become more and more complex. Designers handle this increasing complexity by reusing existing components (Intellectual Property blocks). At the same time, the systems must still fulfill strict requirements on reliability and correctness.

This thesis proposes a formal verification methodology which integrates smoothly with component-based system-level design using a divide-and-conquer approach. The methodology assumes that the system consists of several reusable components. Each of these components has already been formally verified by its designers and is considered correct, given that the environment satisfies certain properties imposed by the component. What remains to be verified is the glue logic inserted between the components. Each piece of glue logic is verified one at a time using model checking techniques.

The verification methodology, as well as the underlying theoretical framework and algorithms, is presented in the thesis.

Experimental results have shown the efficiency of the proposed methodology and demonstrated that it is feasible to apply it to real-life examples. / Report code: LiU-Tek-Lic-2003:57.
286
A framework for semantically verifying schema mappings for data exchange
Walny, Jagoda K, 06 1900 (links)
We propose a framework for semi-automatically verifying relational database schema mappings for data exchange.
Schema mappings for data exchange formally describe how to move data between a source and target database. State-of-the-art schema mapping tools propose several mappings, but require user intervention to determine their semantic correctness. For this, the user must understand the domain the schemas represent and the meanings of individual schema elements in relation to the domain.
Our framework eases the task of understanding the domain and schemas and performs preliminary mapping verification. We use a readable, expressive, and formal conceptual model - a domain ontology - to model the source and target schema domain. We model the schema semantics by annotating schema elements with ontology elements. Our mapping verification algorithm rewrites mappings as statements in terms of the ontology, and uses a reasoner to check that the statements are entailed by the ontology.
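As a rough illustration of the idea (not the thesis's actual algorithm, annotation language, or reasoner), the sketch below annotates schema elements with ontology concepts and accepts a proposed source-to-target correspondence only if it is supported by a toy subclass hierarchy:

# Toy illustration: schema elements are annotated with ontology concepts,
# and a proposed mapping is accepted only if the source concept is
# subsumed by the target concept. The ontology, schemas and mapping
# below are hypothetical.

ontology = {  # child concept -> parent concept
    "GradStudent": "Student",
    "Student": "Person",
    "Employee": "Person",
}

annotations = {  # schema element -> ontology concept
    "src.grad_students": "GradStudent",
    "tgt.people": "Person",
    "tgt.employees": "Employee",
}

def subsumed_by(child, parent):
    """True if 'parent' is an ancestor of (or equal to) 'child' in the ontology."""
    while child is not None:
        if child == parent:
            return True
        child = ontology.get(child)
    return False

def verify_mapping(source_elem, target_elem):
    return subsumed_by(annotations[source_elem], annotations[target_elem])

print(verify_mapping("src.grad_students", "tgt.people"))     # True: plausible mapping
print(verify_mapping("src.grad_students", "tgt.employees"))  # False: rejected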
287
Software Verification for a Custom Instrument using VectorCAST and CodeSonar
Ward, Christina Dawn, 01 May 2011 (links)
The goal of this thesis is to apply a structured verification process to a software package using a set of commercially available verification tools. The software package to be verified is adapted from a project that was developed to monitor an industrial machine at the Oak Ridge National Laboratory and includes two major subsystems. One subsystem, referred to as the Industrial Machine Monitoring Instrument (IMMI), connects to a machine and monitors operating parameters using common industrial sensors. A second subsystem, referred to as the Distributed Control System (DCS), interfaces between the IMMI and a personal computer, which provides a human-machine interface through a hyperterminal. Both the IMMI and DCS are built around Freescale’s MC9S12XDP microcontroller, using CodeWarrior as the Integrated Development Environment (IDE). The software package subjected to the structured verification process includes the main C code with its header file and the code for its interrupt events for the IMMI, as well as the main C code for the DCS and its interrupt events. The software package is exposed to the scrutiny of two verification tools, VectorCAST and CodeSonar. VectorCAST is used to execute test cases and provide results for code coverage based on statement and branch coverage. CodeSonar is used to identify issues with the code at compile time, such as allocation/deallocation issues, unsafe functions, and language use problems. The results from both verification tools are evaluated and the necessary changes are made to the software package. The modified software is then tested again with VectorCAST and CodeSonar. The final verification step is downloading the modified code into the IMMI and DCS microcontrollers and testing the overall system, using hardware developed to simulate realistic signals, to ensure that the expected results are achieved.
288
Integration of Verification and Testing into Compilation Systems
Berlin, 03 December 2001 (links) (PDF)
No description available.
289
Generic Techniques for the verification of infinite-state systems
Legay, Axel, 10 December 2007 (links)
Within the context of the verification of infinite-state systems, 'regular model checking' is the name of a family of techniques in which states are represented by words or trees, sets of states by finite automata on these objects, and transitions by finite automata operating on pairs of state encodings, i.e., finite-state transducers. In this context, the problem of computing the set of reachable states of a system can be reduced to that of computing the iterative closure of the finite-state transducer representing its transition relation. This thesis provides several techniques for computing the transitive closure of a finite-state transducer. One of the motivations of the thesis is to show the feasibility and usefulness of this approach through a combination of the necessary theoretical developments, implementation, and experimentation. For systems whose states are encoded by words, the iteration technique proceeds by comparing a finite sequence of successive powers of the transducer, detecting an 'increment' that is added to move from one power to the next, and extrapolating the sequence by allowing arbitrary repetitions of this increment. For systems whose states are represented by trees, the iteration technique proceeds by computing the powers of the transducer and progressively collapsing their states according to an equivalence relation until a fixed point is reached. The proposed iteration techniques can just as well be exploited to compute the closure of a given set of states under repeated applications of the transducer, which has proven to be a very effective way of using the technique. Various examples have been handled completely within the automata-theoretic setting.

Other applications of the techniques include the verification of linear temporal properties as well as the computation of the convex hull of a finite set of integer vectors.
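As a much-simplified illustration of the reachability computation described above (a finite fixed-point iteration on an explicit relation, not the thesis's transducer extrapolation or the actual tool), consider:

# Simplified sketch: compute the states reachable from an initial set
# under a transition relation, iterating until a fixed point is reached.
# In regular model checking the sets and the relation would be given by
# finite automata and a transducer; here they are explicit finite sets.

def post(states, relation):
    """One-step successors of 'states' under 'relation' (a set of pairs)."""
    return {t for (s, t) in relation if s in states}

def reachable(initial, relation):
    reached = set(initial)
    frontier = set(initial)
    while frontier:                       # stop when no new states appear
        frontier = post(frontier, relation) - reached
        reached |= frontier
    return reached

# Hypothetical transition relation on states 0..4.
relation = {(0, 1), (1, 2), (2, 1), (2, 3)}
print(sorted(reachable({0}, relation)))   # [0, 1, 2, 3]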
290
Verification and control of o-minimal hybrid systems and weighted timed automata
Brihaye, Thomas, 02 June 2006 (links)
This thesis lies at the interface of theoretical computer science and mathematical logic. It concentrates in particular on the mathematical aspects of verification and control, focusing on the study of two subclasses of hybrid automata: weighted timed automata and o-minimal hybrid automata.

Concerning weighted timed automata, by introducing a new algorithm we give an exact characterization of the complexity of the optimal reachability problem, proving that it is PSpace-complete. We prove that model checking the logic WCTL is undecidable in general. We then turn to a restriction of WCTL and show that the decidability of model checking this restricted logic depends on the dimension of the automaton and on whether time is discrete or dense. Finally, we prove that the optimal control problem is undecidable in general; we prove, however, that this same problem is decidable for one-dimensional weighted timed automata.

Concerning o-minimal hybrid automata, using a symbolic encoding of trajectories by words, we prove the existence of a finite bisimulation for these automata. Moreover, again using this encoding of trajectories by words, we obtain decidability results for game problems on o-minimal hybrid automata: for a class of o-minimal hybrid automata, we prove (i) that the existence of a winning strategy can be decided and (ii) that such winning strategies can be synthesized.
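For context, the quantity minimized in the optimal reachability problem mentioned above is standardly defined as follows (a textbook definition recalled for reference, with notation that is ours rather than the thesis's): a run ρ alternates delays d_i in locations ℓ_i, each location carrying a cost rate, with discrete transitions e_i, each carrying a cost, and

% Cost of a run in a weighted (priced) timed automaton, and the optimal
% cost of reaching a target set Goal.
\mathrm{cost}(\rho) \;=\; \sum_{i} \mathrm{rate}(\ell_i)\cdot d_i \;+\; \sum_{i} \mathrm{cost}(e_i),
\qquad
\mathrm{optcost}(\mathit{Goal}) \;=\; \inf_{\rho\ \text{reaching}\ \mathit{Goal}} \mathrm{cost}(\rho).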