21

Evaluating the Robustness of Resource Allocations Obtained through Performance Modeling with Stochastic Process Algebra

Srivastava, Srishti 09 May 2015 (has links)
Recent developments in parallel and distributed computing have led to a proliferation of efforts to solve large, computationally intensive mathematical, scientific, and engineering problems that consist of several parallelizable parts and several non-parallelizable (sequential) parts. In a parallel and distributed computing environment, the performance goal is to optimize the execution of the parallelizable parts of an application on concurrent processors. This requires efficient application scheduling and resource allocation that map applications to a set of suitable parallel processors such that the overall performance goal is achieved. However, such computational environments are often prone to unpredictable variations in application (problem and algorithm) and system characteristics, so a robustness study is required to guarantee a desired level of performance. Given an initial workload, a mapping of applications to resources is considered robust if it optimizes execution performance and guarantees a desired level of performance in the presence of unpredictable perturbations at runtime. In this research, a stochastic process algebra, Performance Evaluation Process Algebra (PEPA), is used to obtain resource allocations via numerical analysis of performance models of the parallel execution of applications on parallel computing resources. The PEPA performance model is translated into an underlying Markov chain model from which performance measures are obtained. Further, a robustness analysis of the allocation techniques is performed to find a robust mapping from a set of initial mapping schemes. The numerical analysis of the performance models agrees with the simulation results reported in earlier literature. Compared to direct experiments and simulations, numerical models and the corresponding analyses are easier to reproduce, do not incur setup or installation costs, do not require learning a simulation framework, and are not limited by the complexity of the underlying infrastructure or simulation libraries.
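
As a rough illustration of how performance measures fall out of the underlying Markov chain, the sketch below computes the steady-state distribution of a small continuous-time Markov chain; the three states and their rates are invented for the example and merely stand in for the chain a PEPA tool would derive from an actual allocation model.

import numpy as np

# Minimal sketch: steady-state analysis of a continuous-time Markov chain,
# standing in for the chain a PEPA tool would derive from a performance model.
# States and rates are assumptions for illustration only.
# 0 = processor idle, 1 = processing a task, 2 = synchronizing results.
arrival, service, sync_done = 2.0, 5.0, 4.0

# Infinitesimal generator matrix Q (each row sums to zero).
Q = np.array([
    [-arrival,   arrival,   0.0      ],
    [ 0.0,      -service,   service  ],
    [ sync_done, 0.0,      -sync_done],
])

# Solve pi Q = 0 subject to sum(pi) = 1.
A = np.vstack([Q.T, np.ones(Q.shape[0])])
b = np.zeros(Q.shape[0] + 1)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print("steady-state distribution:", pi)
print("processor utilization    :", pi[1])            # fraction of time processing
print("task throughput          :", pi[1] * service)  # completed tasks per time unit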
22

Abordagem algébrica para seleção de clones ótimos em projetos genomas e metagenomas / Algebraic approach to optimal clone selection in genomics and metagenomics projects.

Cantão, Mauricio Egidio 01 December 2009 (has links)
Due to the wide diversity of unknown microorganisms in the environment, 99% of them cannot be grown in the traditional culture media used in laboratories. Metagenomics projects are therefore proposed to study the microbial communities present in the environment using molecular techniques, especially sequencing, and an accumulation of sequences produced by these projects is expected over the coming years. The sequences produced by genomics and metagenomics projects present several challenges for processing, storage, and analysis, such as the search for clones containing genes of interest. This work presents an algebraic approach, based on process algebra, that dynamically defines and manages the rules for selecting clones in genomic and metagenomic libraries. Furthermore, a web interface was developed to allow researchers to easily create and execute their own clone-selection rules over genomic and metagenomic sequence databases. The software was tested on genomic and metagenomic libraries and was able to select clones containing genes of interest.
23

An Intermediate Model for the Verification of Asynchronous Real-Time Embedded Systems: Definition and Application of the ATLANTIF Language

Stoecker, Jan 10 December 2009 (has links) (PDF)
Validating realistic critical systems requires the ability to model complex data, asynchronous parallelism, and real time simultaneously, and to verify them formally. High-level languages, such as those that inherit the theoretical foundations of process algebras, have a concise syntax and great expressiveness for representing these aspects; however, they are supported by few software tools that apply efficient model-checking algorithms. Such tools do exist for lower-level graphical models, such as timed automata (e.g., UPPAAL) and time Petri nets (e.g., TINA). Intermediate models are one way to bridge the gap that separates high-level languages from graphical models. For example, NTIF (New Technology Intermediate Format) was proposed to represent untimed sequential processes that manipulate complex data. In this thesis, we propose a new model named ATLANTIF, which enriches NTIF with real-time constructs and with parallel compositions of sequential processes; their synchronization is expressed in a simple and intuitive way by the new notion of synchronizer. We show that ATLANTIF can express the main constructs of high-level languages. We also present translators from ATLANTIF to timed automata (for verification with UPPAAL) and to time Petri nets (for verification with TINA). ATLANTIF thus extends the class of systems that can in practice be formally verified, which we illustrate with an example.
24

Génération automatique d'implémentation distribuée à partir de modèles formels de processus concurrents asynchrones / Automatic Distributed Code Generation from Formal Models of Asynchronous Concurrent Processes

Evrard, Hugues 10 July 2015 (has links)
LNT is a recent formal specification language, based on process algebras, in which several concurrent asynchronous processes can interact by multiway rendezvous (i.e., involving two or more processes) with data exchange. The CADP (Construction and Analysis of Distributed Processes) toolbox offers several techniques related to state-space exploration, such as model checking, to formally verify an LNT specification. This thesis introduces a method for generating a distributed implementation from an LNT formal model of a parallel composition of processes. Building on CADP, we developed the new DLC (Distributed LNT Compiler) tool, which generates, from an LNT specification, a distributed implementation in C that can be deployed on several distinct machines linked by a network. To handle multiway rendezvous with data exchange between distant processes correctly and efficiently, we designed a synchronization protocol that combines different approaches suggested in the past. We also set up a verification method for this kind of protocol which, using LNT and CADP, can detect livelocks or deadlocks due to the protocol and check that the protocol realizes rendezvous consistently with respect to a given specification. This method allowed us to identify possible deadlocks in a protocol from the literature and to verify the correct behavior of our own protocol. We also designed a mechanism that enables the user, by embedding user-defined C procedures into the implementation, to set up interactions between the generated implementation and other systems in its environment. Finally, we used the Raft consensus algorithm as a case study, in particular to measure the performance of an implementation generated by DLC.
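
As a rough illustration of the multiway-rendezvous idea (two or more processes released together, with data exchange), the sketch below uses a central coordinator per gate; the process names are invented and the code is not DLC's synchronization protocol, which is distributed and formally verified.

import threading
from queue import Queue

class Gate:
    """Toy multiway-rendezvous gate: releases all participants together and
    shares the data each of them offered. Illustrative only; this is not
    DLC's synchronization protocol."""
    def __init__(self, parties):
        self.parties = parties
        self.lock = threading.Lock()
        self.offers = []    # data offered by currently waiting participants
        self.replies = []   # one reply queue per waiting participant

    def sync(self, offer):
        with self.lock:
            reply = Queue()
            self.offers.append(offer)
            self.replies.append(reply)
            if len(self.offers) == self.parties:   # last arrival fires the rendezvous
                exchanged = list(self.offers)
                for r in self.replies:
                    r.put(exchanged)
                self.offers, self.replies = [], []
        return reply.get()                         # block until the rendezvous fires

# Usage: three processes synchronize on one gate and observe each other's data.
gate = Gate(parties=3)

def process(name):
    data = gate.sync(offer=name)
    print(name, "completed rendezvous with offers", data)

threads = [threading.Thread(target=process, args=(f"P{i}",)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()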
25

Formalisation of SysML design models and an analysis strategy using refinement

LIMA, Lucas Albertins de 03 March 2016 (has links)
The increasing complexity of systems has led to increasing difficulty in design. The standard approach to development, based on trial and error, with testing used at later stages to identify errors, is costly and leads to unpredictable delivery times. In addition, for critical systems, for which safety is a major concern, early verification and validation (V&V) is recognised as a valuable approach to promote dependability. In this context, we identify three important and desirable features of a V&V technique: (i) a graphical modelling language; (ii) formal and rigorous reasoning; and (iii) automated support for modelling and reasoning. We address these points with a refinement technique for SysML supported by tools. SysML is a UML-based language for systems design; it has itself become a de facto standard in the area, with wide availability of tool support from vendors such as IBM, Atego, and Sparx Systems. Our work is distinctive in two ways: a semantics for refinement and for a representative collection of elements from the UML4SysML profile (blocks, state machines, activities, and interactions) used in combination. We provide a means to analyse design models specified using SysML, which facilitates the discovery of problems earlier in the system development lifecycle, reducing production time and costs. In this work we describe our semantics, which is defined using a state-rich process algebra called CML and implemented in a tool for automatic generation of formal models. We also show how the semantics can be used for refinement-based analysis and development. Our case studies are a leadership-election protocol, a critical component of an industrial application, and a dwarf signal, a device used to control rail traffic. Our contributions are: a set of guidelines that give meaning to the different SysML modelling elements used during system design; the individual formal semantics for SysML activities, blocks, and interactions; an integrated semantics that combines these with a semantics previously defined for state machines; and a framework for reasoning, using refinement, about systems specified by collections of SysML diagrams.
27

Analyse de ressources pour les systèmes concurrents dynamiques / Resource analysis for concurrent and dynamic systems

Deharbe, Aurélien 21 September 2016 (has links)
During execution, concurrent systems manipulate many kinds of dynamic resources: files, communication links, memory, etc. The behavioural properties of such systems are therefore closely linked to the way they allocate, use, and destroy these resources. In this thesis we develop a quantitative static analysis of this kind of resource for concurrent and dynamic systems. The systems we consider can be concurrent and parallel programs (for example, written in the Piccolo language developed as part of this work) or models of more general systems. To remain generic, our work relies heavily on process algebras, in particular the pi-calculus, for which we propose a variant semantics together with several abstractions tailored to the observation of resources. The theoretical core of our analysis is a new kind of nominal automaton, the nu-automaton, which makes it possible to reason specifically about dynamic resources, both to characterize quantitative notions of resource consumption and to support future qualitative analyses. Building on this formalism, we then design a set of algorithms that implement the results established on nu-automata. Finally, we report several experiments with our prototype resource-analysis tool on classical pi-calculus examples.
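
As a loose, trace-level illustration of the quantitative question the analysis answers (how many dynamic resources can be live at the same time), the following toy sketch scans an assumed sequence of allocation and release events; it does not reflect the nu-automata construction itself.

# Toy illustration only: peak number of simultaneously live resources along a
# single trace of alloc/free events.  The event trace and resource names are
# assumptions; the thesis performs a static analysis over nu-automata.
def peak_live_resources(trace):
    live, peak = set(), 0
    for action, resource in trace:
        if action == "alloc":
            live.add(resource)
        elif action == "free":
            live.discard(resource)
        peak = max(peak, len(live))
    return peak

trace = [("alloc", "c1"), ("alloc", "c2"), ("free", "c1"),
         ("alloc", "c3"), ("alloc", "c4"), ("free", "c2")]
print(peak_live_resources(trace))   # -> 3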
28

Techniques and tools for the verification of concurrent systems

Palikareva, Hristina January 2012 (has links)
Model checking is an automatic formal verification technique for establishing the correctness of systems. It has been widely used in industry for analysing and verifying complex safety-critical systems in application domains such as avionics, medicine and computer security, where manual testing is infeasible and even minor errors could have dire consequences. In our increasingly parallelised world, concurrency has become pivotal and is seamlessly woven into programming paradigms; it remains, however, extremely challenging to model concurrent systems and to establish the correctness of their intended behaviour. Tools for model checking concurrent systems face severe limitations due to scalability problems arising from the need to examine all possible interleavings (schedules) of executions of parallel components. Moreover, concurrency poses additional challenges to model checking, giving rise to phenomena such as nondeterminism, deadlock and livelock. In this thesis we focus on adapting and developing novel model-checking techniques for concurrent systems in the setting of the process algebra CSP and its primary model checker FDR. CSP allows for compact modelling and precise analysis of event-based concurrency, grounded on synchronous message passing as the fundamental mechanism of inter-component communication. In particular, we investigate techniques based on symbolic model checking, static analysis and abstraction, all of them exploiting the compositionality inherent in CSP and aiming to increase the scale of systems that can be tractably analysed. Firstly, we investigate symbolic model-checking techniques based on Boolean satisfiability (SAT), which we adapt for the traces model of CSP. We tailor bounded model checking (BMC), which can be used for bug detection, and temporal k-induction, which aims at establishing the inductiveness of properties and is capable of both finding bugs and establishing the correctness of systems. Secondly, we propose a static analysis framework for establishing the livelock freedom of CSP processes, with lessons for other concurrent formalisms. As opposed to traditional exhaustive state-space exploration, our framework employs a system of rules on the syntax of a process to calculate a sound approximation of its fair/co-fair sets of events. The rules either safely classify a process as livelock-free or report inconclusiveness, thereby trading accuracy for speed. Finally, we develop a series of abstraction/refinement schemes for the traces, stable-failures and failures-divergences models of CSP and embed them into a fully automated and compositional CEGAR framework. For each of these techniques we present an implementation and an experimental evaluation on a set of CSP benchmarks.
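
To convey the flavour of the bounded model checking mentioned above, here is a toy sketch that unrolls a two-bit counter for k steps and asks a solver whether the state 11 is reachable. The system, the property, and the use of the z3 solver are all assumptions made for illustration; the thesis applies SAT-based BMC and k-induction to the traces model of CSP within FDR.

from z3 import Bool, Solver, Not, Xor, sat

# Toy bounded model checking of a 2-bit binary counter: is state (hi, lo) = (1, 1)
# reachable within k steps?  Illustrates the unrolling idea only.
def bmc_reaches_11(k):
    """Return the first step <= k at which state 11 is reachable, else None."""
    s = Solver()
    lo = [Bool(f"lo_{t}") for t in range(k + 1)]   # low bit at step t
    hi = [Bool(f"hi_{t}") for t in range(k + 1)]   # high bit at step t
    s.add(Not(lo[0]), Not(hi[0]))                  # initial state 00
    for t in range(k):                             # transition relation: increment
        s.add(lo[t + 1] == Not(lo[t]))
        s.add(hi[t + 1] == Xor(hi[t], lo[t]))
    for t in range(k + 1):                         # query the target state at each depth
        s.push()
        s.add(lo[t], hi[t])
        if s.check() == sat:
            s.pop()
            return t
        s.pop()
    return None

print(bmc_reaches_11(4))   # -> 3  (00 -> 01 -> 10 -> 11)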
29

Geração de expressões algébricas para processos de negócio usando reduções de digrafos série-paralelo / Generation of algebraic expressions for business processes using reductions on series-parallel digraphs

Oikawa, Márcio Katsumi 25 September 2008 (has links)
Modeling and execution control are two complementary approaches to business process management that have nevertheless been developed independently. On the one hand, modeling is usually performed by business specialists and explores semantic aspects of the process. On the other hand, execution control studies consistent and efficient implementation mechanisms. This work presents an algorithmic method that connects modeling and execution control through the generation of algebraic expressions from acyclic digraphs. By hypothesis, we assume that business process models are defined by graph structures and that execution control mechanisms are based on the interpretation of process algebra expressions. For the generation of algebraic expressions, this thesis presents the topological properties of series-parallel digraphs and defines a transformation system based on digraph reduction. We then present an algorithm that identifies series-parallel digraphs and generates algebraic expressions. The work also discusses the treatment of digraphs that are not series-parallel and, for some of these cases, presents solutions based on topological changes. Finally, the algorithm is illustrated with a case study from a real application.
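
A compact sketch of the reduction idea is given below: edges of a two-terminal digraph carry labels, parallel edges are merged with an assumed '|' operator, and internal vertices with exactly one incoming and one outgoing edge are removed with an assumed ';' operator, until a single edge (and hence an algebraic expression) remains. The notation and the example graph are illustrative, not the thesis's own.

from collections import defaultdict

def sp_expression(edges, source, sink):
    """edges: list of (u, v, label).  Returns the reduced algebraic expression,
    or None if the two-terminal digraph is not series-parallel."""
    edges = list(edges)
    changed = True
    while changed and len(edges) > 1:
        changed = False
        # Parallel reduction: two edges with the same endpoints become one.
        seen = {}
        for i, (u, v, e) in enumerate(edges):
            if (u, v) in seen:
                j = seen[(u, v)]
                edges[j] = (u, v, f"({edges[j][2]} | {e})")
                edges.pop(i)
                changed = True
                break
            seen[(u, v)] = i
        if changed:
            continue
        # Series reduction: an internal vertex with one incoming and one
        # outgoing edge is removed, concatenating the two labels.
        indeg, outdeg = defaultdict(list), defaultdict(list)
        for i, (u, v, e) in enumerate(edges):
            outdeg[u].append(i)
            indeg[v].append(i)
        for w in set(indeg) & set(outdeg):
            if w not in (source, sink) and len(indeg[w]) == 1 and len(outdeg[w]) == 1:
                i, j = indeg[w][0], outdeg[w][0]
                u, _, e1 = edges[i]
                _, v, e2 = edges[j]
                merged = (u, v, f"({e1} ; {e2})")
                edges = [x for k, x in enumerate(edges) if k not in (i, j)] + [merged]
                changed = True
                break
    if len(edges) == 1 and edges[0][0] == source and edges[0][1] == sink:
        return edges[0][2]
    return None   # not a two-terminal series-parallel digraph

# Assumed toy process: a, then b and c in parallel, then d.
g = [("s", "1", "a"), ("1", "2", "b"), ("1", "2", "c"), ("2", "t", "d")]
print(sp_expression(g, "s", "t"))   # e.g. ((a ; (b | c)) ; d); grouping may vary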
30

Formal methods for functional verification of cache-coherent systems-on-chip / Méthodes Formelles pour la vérification fonctionnelle des systèmes sur puce cache cohérent

Kriouile, Abderahman 17 September 2015 (has links)
State-of-the-art System-on-Chip (SoC) architectures integrate many different components, such as processors, accelerators, memories, and I/O blocks, some of which (but not all) may have caches. Because the effort of validation with the simulation-based techniques currently used in industry grows exponentially with the complexity of the SoC, this thesis investigates the use of formal verification techniques in this context. More precisely, we use the CADP toolbox to develop and validate a generic formal model of a heterogeneous cache-coherent SoC compliant with the recent AMBA 4 ACE specification proposed by ARM. We use a constraint-oriented specification style to model the general requirements of the specification, and we verify system properties on both the constrained and unconstrained models to detect cache-coherency corner cases. We take advantage of the parametrization of the proposed model to produce a comprehensive set of counterexamples for properties not satisfied in the unconstrained model. The results of formal verification are then used to improve industrial simulation-based verification techniques in two respects. On the one hand, we suggest using the formal model to assess the sanity of an interface verification unit. On the other hand, to generate clever semi-directed test cases from temporal-logic properties, we propose a two-step approach: one step generates system-level abstract test cases using the model-based testing tools of the CADP toolbox; the other refines those tests into interface-level concrete test cases that can be executed at RTL level with a commercial coverage-directed test generation tool. We found that our approach helps in the transition between interface-level and system-level verification, facilitates the validation of system-level properties, and enables early detection of bugs in both the SoC and the commercial test bench.
