221

Energy-aware real-time scheduling in embedded multiprocessor systems / Ordonnancement temps réel dans les systèmes embarqués multiprocesseurs contraints par l'énergie

Nélis, Vincent 18 October 2010 (has links)
Nowadays, computer systems are everywhere. From simple portable devices such as watches and MP3 players to large stationary installations that control nuclear power plants, computer systems are now present in all aspects of our modern, everyday life. In only about 70 years, they have completely transformed our way of life, and they have reached such a degree of sophistication that they will soon be capable of driving our cars and cleaning our houses without any human intervention. As computer systems gain in responsibilities, it becomes essential that they provide both safety and reliability. Indeed, a failure in a system such as the anti-lock braking system (ABS) of a car could threaten human lives and have catastrophic and irreversible consequences. Hence, for many years, researchers have addressed these emerging problems of system safety and reliability that come along with this rapid evolution.

This thesis provides a general overview of embedded real-time computer systems, a particular kind of computer system whose number grows daily. We provide the reader with some preliminary knowledge and a good understanding of the concepts that underlie this emerging technology. We focus especially on the theoretical problems related to the real-time issue and briefly summarize the main solutions, together with their advantages and drawbacks. This brings the reader through all the conceptual layers constituting a computer system, from the software level (the logical part), which specifies both the system behavior and requirements, to the hardware level (the physical part), which actually performs the expected computations and reacts to the environment. Along the way, we introduce the theoretical models that allow researchers to carry out the analyses ensuring that all the system requirements are fulfilled. Finally, we address the energy consumption problem in embedded systems. We describe the various factors of power dissipation in modern technologies and introduce different solutions to reduce this consumption.

This thesis focuses on a specific type of computer system known as "embedded real-time systems." A system is said to be "embedded" when it is developed to serve a precisely defined purpose. A mobile phone is a perfect example of an embedded system, since all of its functionalities are rigorously defined before it is even designed. Conversely, a personal computer is generally not considered an embedded system, since its designers do not know in advance what it will be used for. A large proportion of embedded systems are subject to very strong timing constraints, which distinguishes them even further from general-purpose computers. For example, when a car driver brakes suddenly, the on-board computer triggers the ABS application, and it is essential that this application be handled within a short deadline. In other words, the ABS functionality must be processed with priority over the vehicle's other functionalities. This type of embedded system is therefore called "real-time," owing to these notions of time and of priorities among applications. The problem posed by real-time systems is the following: how can one determine, at every moment, an execution order of the different functionalities such that they are all executed completely within their deadlines? Moreover, with the recent appearance of multiprocessor systems, this problem has become considerably more complex, since the system must now determine which functionality executes at which moment on which processor so that all timing constraints are respected. Finally, these multiprocessor embedded real-time systems quickly found themselves confronted with an energy-consumption problem: their demand in terms of performance (and hence energy) has grown much faster than the capacity of the batteries that power them. This problem is currently encountered by many systems, such as mobile phones. The objective of this thesis is to go through the different components of such embedded systems and to propose solutions to reduce their energy consumption. / Doctorat en Sciences / info:eu-repo/semantics/nonPublished
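The scheduling question the abstract poses (can every functionality complete within its deadline on m processors?) is commonly approached with utilization-based feasibility tests. The sketch below checks the well-known Goossens-Funk-Baruah sufficient condition for global EDF with implicit deadlines; the task set is invented for illustration, and the thesis's own analyses are not limited to this test:

```python
from dataclasses import dataclass

@dataclass
class Task:
    wcet: float      # worst-case execution time
    period: float    # period == relative deadline (implicit deadlines)

    @property
    def utilization(self) -> float:
        return self.wcet / self.period

def gfb_edf_feasible(tasks: list[Task], m: int) -> bool:
    """Sufficient (not necessary) schedulability test for global EDF on m
    identical processors: the task set is schedulable if
    U_sum <= m - (m - 1) * U_max (Goossens-Funk-Baruah bound)."""
    u_sum = sum(t.utilization for t in tasks)
    u_max = max(t.utilization for t in tasks)
    return u_sum <= m - (m - 1) * u_max

# Example: three periodic tasks on a 2-processor platform.
tasks = [Task(2, 10), Task(3, 15), Task(5, 20)]
print(gfb_edf_feasible(tasks, m=2))  # True: U_sum = 0.65 <= 2 - 0.25
```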
222

Precise Analysis of Private And Shared Caches for Tight WCET Estimates

Nagar, Kartik January 2016 (has links) (PDF)
Worst Case Execution Time (WCET) is an important metric for programs running on real-time systems, and finding precise estimates of a program's WCET is crucial to avoid over-allocation and wastage of hardware resources and to improve the schedulability of task sets. Hardware caches have a major impact on a program's execution time, and accurate estimation of a program's cache behavior generally leads to a significant reduction of its estimated WCET. However, the cache behavior of an access cannot be determined in isolation, since it depends on the access history, and in multi-path programs the sequence of accesses made to the cache is not fixed. Hence, the same access can exhibit different cache behavior in different execution instances. This issue is further exacerbated in shared caches in a multi-core architecture, where interfering accesses from co-running programs on other cores can arrive at any time and modify the cache state. Further, cache analysis aimed at WCET estimation should be provably safe, in that the estimated WCET should always exceed the actual execution time across all execution instances. Faced with such conflicting requirements, previous approaches to cache analysis try to find memory accesses in a program which are guaranteed to hit the cache, irrespective of the program input or, in the case of a shared cache, the interferences from other co-running programs. To do so, they find the worst-case cache behavior of every individual memory access, analyzing the program (and the interferences to a shared cache) to determine whether there are execution instances where an access can suffer a cache miss. However, this approach forgoes more precise predictions of private cache behavior which could be safely used for WCET estimation, and it is significantly imprecise for shared cache analysis, where it is often impossible to guarantee that an access always hits the cache. In this work, we take a fundamentally different approach to cache analysis, by (1) finding the worst-case behavior of groups of cache accesses, and (2) finding the exact cache behavior in the worst-case program execution instance, i.e., the execution instance with the maximum execution time. For shared caches, we propose the Worst Case Interference Placement (WCIP) technique, which finds the worst-case timing of interfering accesses that would cause the maximum number of cache misses on the worst-case execution path of the program. We first use Integer Linear Programming (ILP) to find an exact solution to the WCIP problem. However, this approach does not scale well for large programs, so we investigate the WCIP problem in detail and prove that it is NP-hard. In the process, we discover that the source of hardness of the WCIP problem lies in finding the worst-case execution path that would exhibit the maximum execution time in the presence of interferences. We use this observation to propose an approximate algorithm for performing WCIP, which bypasses the hard problem of finding the worst-case execution path by simply assuming that all cache accesses made by the program occur on a single path. This allows us to use a simple greedy algorithm to distribute the interfering accesses by choosing those cache accesses which could be most affected by interferences. The greedy algorithm also guarantees that the increase in WCET due to interferences is linear in the number of interferences.
Experimentally, we show that WCIP provides a substantial precision improvement in the final WCET over previous approaches to shared cache analysis, and that the approximate algorithm almost matches the precision of the ILP-based approach while being considerably faster. For private caches, we identify multiple scenarios where the hit-miss predictions made by traditional Abstract Interpretation-based approaches are not sufficient to fully capture cache behavior for WCET estimation. We introduce the concept of cache miss paths, which are abstractions of the program paths along which an access can suffer a cache miss. We propose an ILP-based approach which uses cache miss paths to find the exact cache behavior in the worst-case execution instance of the program. However, the ILP-based approach needs information about the worst-case execution path to predict cache behavior, which makes it difficult to integrate with other micro-architectural analyses. We then show that most of the precision improvement of the ILP-based approach can be recovered without any knowledge of the worst-case execution path, by a careful analysis of the cache miss paths themselves. In particular, we can use cache miss paths to find the worst-case behavior of groups of cache accesses. Further, we can find upper bounds on the maximum number of times that cache accesses inside loops can exhibit worst-case behavior. The result is a scalable, precise method for private cache analysis that can be easily integrated with other micro-architectural analyses.
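A minimal sketch of the greedy step behind the approximate WCIP algorithm, under the single-path assumption the abstract describes. The eviction-slack scoring model, the LRU assumption, and all names here are our own illustration, not the thesis's actual formulation:

```python
def greedy_wcip(hits, num_interferences, miss_penalty):
    """Approximate worst-case interference placement on a single path.

    hits: list of (access_id, slack) pairs, where slack is the assumed
    number of interfering accesses needed to age that cached block out
    (e.g., under LRU, a hit of age a in a k-way set needs k - a).
    The greedy rule spends the interference budget on the hits that are
    cheapest to evict, so each converted hit adds one miss_penalty and
    the WCET increase stays linear in the number of interferences.
    """
    extra_wcet = 0
    budget = num_interferences
    for access_id, slack in sorted(hits, key=lambda h: h[1]):
        if slack > budget:
            break
        budget -= slack
        extra_wcet += miss_penalty
    return extra_wcet

# Example: four hits with eviction slacks 1..4, a budget of 5 interferences,
# and a 10-cycle miss penalty: hits "a" and "b" become misses, adding 20 cycles.
print(greedy_wcip([("a", 1), ("b", 2), ("c", 3), ("d", 4)], 5, 10))  # 20
```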
223

An integrated sensor system for early fall detection

Bandi, Ajay Kumar 05 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Physical activity monitoring using wearable sensors gives valuable information about a patient's neuro activities. Falls among people aged 60 and older in the US are a leading cause of injury-related health issues and present a serious concern in the public health care sector. If emergency treatment is not delivered in time, these injuries may result in disability, paralysis, or even death. In this work, we present an approach for the early detection of fall occurrences. Low-power capacitive accelerometers incorporated with microcontroller processing units were used to obtain accurate information about fall events early. Decision tree algorithms were implemented to set thresholds for the data acquired from the accelerometers. The data are then verified against these thresholds, and the data acquisition decision unit makes the decisions needed to protect patients from fall occurrences. Daily activities are logged on an onboard memory chip, with a Bluetooth option to transfer the data wirelessly to mobile devices. In this work, a system prototype based on neurosignal activities was built and tested against seven different daily human activities in order to differentiate between fall and non-fall events. The developed system features low power, high speed, and high reliability. Eventually, this study will lead to a wearable fall detection system that serves an important need within the health care sector. The Inter-Integrated Circuit (I2C) protocol is used to communicate between the accelerometers and the embedded control system, and data transfer from the microcontroller unit to a mobile device or laptop is done using Bluetooth technology.
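A minimal sketch of the kind of threshold test such a system might run on accelerometer samples; the free-fall/impact thresholds and window size are illustrative assumptions, not the values learned by the thesis's decision trees:

```python
import math

# Illustrative thresholds (in g); a real system would calibrate these from
# labeled accelerometer traces, as the abstract's decision trees do.
FREE_FALL_G = 0.4   # magnitude dip during free fall
IMPACT_G = 2.5      # magnitude spike on impact
MAX_GAP = 20        # max samples allowed between free fall and impact

def magnitude(ax, ay, az):
    """Resultant acceleration in g from one 3-axis accelerometer sample."""
    return math.sqrt(ax**2 + ay**2 + az**2)

def detect_fall(samples):
    """Flag a fall when a free-fall dip is followed shortly by an impact spike.

    samples: iterable of (ax, ay, az) tuples in g, in sampling order.
    """
    free_fall_at = None
    for i, (ax, ay, az) in enumerate(samples):
        m = magnitude(ax, ay, az)
        if m < FREE_FALL_G:
            free_fall_at = i
        elif m > IMPACT_G and free_fall_at is not None and i - free_fall_at <= MAX_GAP:
            return True
    return False

# A fall: near-zero magnitude followed by a hard impact.
trace = [(0.0, 0.0, 1.0)] * 5 + [(0.1, 0.1, 0.2)] * 3 + [(2.0, 1.5, 1.8)]
print(detect_fall(trace))  # True
```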
224

Simulator for optimizing performance and power of embedded multicore processors

Goska, Benjamin J. 26 April 2012 (has links)
This work presents improvements to a multi-core performance/power simulator. The improvements, which include updated power models, voltage-scaling-aware models, and an application-specific benchmark, increase the accuracy of the power models under voltage and frequency scaling. The improved simulator enables more accurate design-space exploration for a biomedical application. The workflow used to modify the simulator is also presented, so that similar modifications can be applied to future simulators. / Graduation date: 2012
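For context on why voltage-scaling-aware power models matter: dynamic CMOS power is commonly approximated by the first-order relation P ≈ C·V²·f, so scaling voltage and frequency together yields roughly cubic power savings. The sketch below and its constants are a generic illustration, not the simulator's actual models:

```python
def dynamic_power(c_eff, v_dd, freq):
    """First-order CMOS dynamic power: P = C_eff * Vdd^2 * f.

    c_eff: effective switched capacitance (farads)
    v_dd:  supply voltage (volts)
    freq:  clock frequency (hertz)
    """
    return c_eff * v_dd**2 * freq

# Scaling Vdd from 1.2 V to 0.9 V while dropping to 75% frequency more than
# halves dynamic power in this toy model (illustrative numbers only).
nominal = dynamic_power(1e-9, 1.2, 1e9)     # ~1.44 W
scaled = dynamic_power(1e-9, 0.9, 0.75e9)   # ~0.61 W
print(nominal, scaled)
```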
225

Physical Design of Optoelectronic System-on-a-Chip/Package Using Electrical and Optical Interconnects: CAD Tools and Algorithms

Seo, Chung-Seok 19 November 2004 (has links)
Current electrical systems face a performance limitation imposed by electrical interconnect technology, which determines overall processing speed. In addition, electrical interconnects containing many long-distance wires require high power to drive. One of the best ways to overcome these bottlenecks is to use optical interconnects to limit interconnect latency and power. This research explores new computer-aided design algorithms for developing optoelectronic systems. These algorithms focus on placement and routing problems using optical interconnections, covering system-on-a-chip design as well as system-on-a-package design. To design optoelectronic systems, optical interconnection models are developed first. The CAD algorithms incorporate these optical interconnection models and solve placement and routing problems for optoelectronic systems. The MCNC and GSRC benchmark circuits are used to evaluate the algorithms.
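As a toy illustration of the trade-off such placement and routing algorithms weigh: a net can be routed optically once its length-dependent electrical delay exceeds the largely length-independent optical conversion overhead. The delay model and constants below are invented for illustration only:

```python
# Toy interconnect-selection rule: electrical wire delay grows roughly
# quadratically with length (RC), while an optical link pays a fixed
# E/O + O/E conversion cost plus near-constant propagation per mm.
# All constants below are illustrative assumptions.
ELEC_DELAY_PS_PER_MM2 = 8.0    # quadratic RC term, ps/mm^2
OPT_CONVERSION_PS = 120.0      # fixed transmitter + receiver overhead, ps
OPT_DELAY_PS_PER_MM = 5.0      # time of flight in the waveguide, ps/mm

def best_interconnect(length_mm):
    electrical = ELEC_DELAY_PS_PER_MM2 * length_mm**2
    optical = OPT_CONVERSION_PS + OPT_DELAY_PS_PER_MM * length_mm
    return ("optical", optical) if optical < electrical else ("electrical", electrical)

for length in (1, 3, 5, 10):
    print(length, best_interconnect(length))
# Short nets stay electrical; beyond a few mm the optical link wins.
```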
226

Modelagem do prognóstico e gestão da saúde de máquinas mecânicas no contexto de sistemas ciberfísicos na manufatura / Prognostics and health management modelling of mechanical machines in the context of cyber-physical systems in manufacturing

Nunez, David Lira 14 September 2017 (has links)
Recent advances in smart manufacturing open up opportunities in industrial support, specifically in maintenance and physical asset management. This trend allows data collected from machines while in full operation to interact with computers in cyberspace through a communication network, forming the concept of cyber-physical systems (CPS). In addition, rapid advances in information and communication technologies provide tools for analyzing these data in an increasingly fast, autonomous, ubiquitous, and real-time way, providing information that helps humans make more effective decisions. In this sense, Prognostics and Health Management (PHM) of machines is indicated as a promising application of smart manufacturing in the CPS context.
Currently, the PHM proposals found in the scientific literature are applied to specific cases and lack a standardized implementation, preventing such approaches from being replicated in different manufacturing scenarios. Thus, the present work proposes the construction of an ontological model to assist in implementing PHM in a variety of manufacturing scenarios, to be harnessed in the future by software tools focused on intelligent manufacturing, standardizing concepts, terms, and the form of data collection and processing. The Design Science Research (DSR) methodological approach is used to guide the development of the research. The constructed ontological model, which integrates both the collected data and the information needed for decision-making, makes it possible to estimate a failure before it occurs in a more autonomous way. Its main results are: a flexible ontology capable of being used on several types of mechanical machines across various types of manufacturing; the possibility of storing the knowledge contained in international standards, machine activity histories, and consolidated PHM architectures, allowing data to be constantly updated according to the particularities of each production process; and, finally, using the SPARQL language, the delivery of information that can be used for decision-making in timely maintenance interventions on the equipment of a real industry. The model is demonstrated on the case of a centrifugal pump, which confirmed its fidelity, integrity, level of detail, robustness, and consistency, providing information fed by real data obtained from nearby companies.
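As a rough sketch of how such an ontology could be queried with SPARQL to support maintenance decisions (using Python's rdflib): the namespace, class, and property names below are hypothetical illustrations, not the thesis's actual ontology terms:

```python
from rdflib import Graph, Literal, Namespace, RDF

# Hypothetical PHM namespace; the real ontology's terms will differ.
PHM = Namespace("http://example.org/phm#")

g = Graph()
pump = PHM.CentrifugalPump01
g.add((pump, RDF.type, PHM.MechanicalMachine))
g.add((pump, PHM.hasHealthIndicator, Literal(0.62)))
g.add((pump, PHM.failureThreshold, Literal(0.70)))

# Find machines whose health indicator is approaching the failure threshold.
query = """
PREFIX phm: <http://example.org/phm#>
SELECT ?machine ?health
WHERE {
    ?machine a phm:MechanicalMachine ;
             phm:hasHealthIndicator ?health ;
             phm:failureThreshold ?limit .
    FILTER (?health > ?limit * 0.8)
}
"""
for machine, health in g.query(query):
    print(f"schedule maintenance for {machine}: indicator {health}")
```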
228

Investigations on CPI Centric Worst Case Execution Time Analysis

Ravindar, Archana January 2013 (has links) (PDF)
Estimating program worst case execution time (WCET) is an important problem in the domain of real-time and embedded systems that are deadline-centric. If the WCET of a program is found to exceed the deadline, the program is either recoded or the target architecture is modified to meet the deadline. Predominantly, there are three broad approaches to estimating WCET: static WCET analysis, hybrid measurement-based analysis, and statistical WCET analysis. Though measurement-based analyzers benefit from knowledge of run-time behavior, the amount of instrumentation remains a concern. This thesis proposes a CPI-centric WCET analyzer that estimates WCET as the product of the worst-case instruction count (IC), estimated using static analysis, and the worst-case cycles per instruction (CPI), computed as a function of measured CPI. In many programs, IC and CPI values are observed to be correlated, and five different kinds of correlation are found. This correlation enables us to tighten the WCET estimate from the product of worst-case IC and worst-case CPI to the product of worst-case IC and the corresponding CPI. A prime advantage of viewing time in terms of CPI is that it lets us exploit program phase behavior. In many programs, CPI varies in phases during execution. Within each phase, the variation is homogeneous and lies within a few percent of the mean; the coefficient of variation of CPI across phases is much greater than within a phase. Using this observation, we estimate program WCET in terms of its phases. Owing to the nature of CPI variation within a phase in such programs, we can use a simple probabilistic inequality, Chebyshev's inequality, to compute bounds on CPI within a desired probability, as sketched below. In some programs that execute many paths depending on if-conditions, CPI variation is observed to be high. The thesis proposes a PC signature, a low-cost way of profiling path information, which is used to isolate points of high CPI variation and to divide a phase into smaller sub-phases of lower CPI variation. Applying Chebyshev's inequality to the sub-phases results in much tighter bounds. Provision also exists to divide a phase into smaller sub-phases based on the allowable variance of CPI within a sub-phase. The proposed technique is implemented on simulators and on a native platform. Other advantages of phases in the context of timing analysis are also presented, including parallelized WCET analysis and estimation of the remaining worst-case execution time for a particular program run.
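A small worked illustration of the Chebyshev step: bounding a phase's CPI given its measured mean and standard deviation. The numbers are invented, and the thesis's exact formulation may differ:

```python
import math

def chebyshev_cpi_upper_bound(mean_cpi, std_cpi, prob):
    """Upper bound on CPI that holds with probability >= prob, via
    Chebyshev's inequality: P(|X - mu| >= k*sigma) <= 1/k^2, so choosing
    k = 1/sqrt(1 - prob) gives P(X <= mu + k*sigma) >= prob."""
    k = 1.0 / math.sqrt(1.0 - prob)
    return mean_cpi + k * std_cpi

# Example: a phase with mean CPI 1.20 and std dev 0.05 (invented numbers).
# The phase's WCET contribution would then be worst-case IC times this bound.
bound = chebyshev_cpi_upper_bound(1.20, 0.05, prob=0.99)
print(f"CPI <= {bound:.3f} with probability >= 0.99")  # CPI <= 1.700
```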
229

Coverability and expressiveness properties of well-structured transition systems

Geeraerts, Gilles 20 April 2007 (has links)
Over the last fifty years, computers have occupied an ever more important place in our daily lives. Today they are present in many applications, in the form of embedded systems. These applications are sometimes critical, insofar as any failure of the computer system can have catastrophic consequences, both human and economic. Consider, for example, the computer systems that control medical devices or certain vital subsystems (such as the brakes) of motor vehicles.

To ensure the correctness of these computer systems, various computer-aided verification techniques have been proposed, mainly over the last three decades. These techniques rest on a common principle: give a formal description of both the system and the property it must respect, and apply an automatic method to prove that the system respects the property. Among the main models able to formally describe computer systems, the class of well-structured transition systems [ACJT96, FS01] occupies an important place, for two essential reasons. First, this class generalizes several other well-studied and useful classes of infinite-state models, such as Petri nets [Pet62] (and their monotonic extensions [Cia94, FGRVB06]) or lossy channel systems [AJ93]. Second, interesting problems can be solved algorithmically on this class. Among these problems is the coverability problem, to which certain interesting safety properties can be reduced.

In the first part of this thesis, we study the coverability problem. Until now, the only general algorithm (that is, one applicable to any well-structured system) to solve this problem was a so-called backward algorithm [ACJT96], which iteratively computes all potentially unsafe states and checks whether the initial state of the system is among them. We propose Expand, Enlarge and Check, the first forward algorithm to solve the coverability problem, which computes the potentially reachable states of the system and checks whether some of them are unsafe. This approach is more efficient in practice, as our experiments show. We also present techniques to increase the efficiency of our method when analyzing Petri nets (or one of their monotonic extensions) or lossy channel systems. Finally, we consider the computation of the coverability set for Petri nets, a mathematical object that can be used, among other things, to solve the coverability problem. We study the Karp & Miller algorithm [KM69], a classical solution for computing this set. We show that an optimization of this algorithm presented in [Fin91] is incorrect, and we propose another, entirely new solution that is more efficient than the Karp & Miller algorithm.

In the second part of the thesis, we study the expressive power of well-structured systems, in terms of both infinite and finite words. The expressive power of a class of systems is, in a sense, a measure of the diversity of behaviors that the models of this class can represent.
Regarding infinite words, we study the expressive power of Petri nets and of two of their extensions (Petri nets with non-blocking arcs and Petri nets with transfer arcs). We show that there is a strict hierarchy between these expressive powers. We also obtain partial results concerning the expressive power of Petri nets with reset arcs. Regarding finite words, we introduce the class of well-structured languages, which are languages accepted by labelled well-structured transition systems whose set of accepting states is upward-closed. We prove three pumping lemmas for these languages. They allow us to easily re-derive classical results from the literature, as well as several new results. In particular, we prove, as in the case of infinite words, that there is a strict hierarchy between the expressive powers of the Petri net extensions considered. / Doctorat en sciences, Spécialisation Informatique / info:eu-repo/semantics/nonPublished
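For intuition, here is a textbook-style sketch of the classical backward coverability scheme discussed above, specialized to Petri nets, where upward-closed sets are represented by finite bases of minimal markings. It is a generic rendering for illustration, not the thesis's Expand, Enlarge and Check algorithm:

```python
def backward_coverability(initial, target, transitions):
    """Classical backward coverability check for a Petri net.

    initial, target: tuples of token counts per place.
    transitions: list of (pre, post) vectors of the same length.
    Maintains a finite basis of minimal markings whose upward closure is
    the set of markings from which `target` is coverable; saturates it
    with predecessor bases and then tests membership of `initial`.
    """
    def leq(a, b):
        return all(x <= y for x, y in zip(a, b))

    basis = {target}
    while True:
        new = set()
        for m in basis:
            for pre, post in transitions:
                # Minimal marking that can fire (pre, post) and reach >= m.
                p = tuple(max(mi - po, 0) + pr
                          for mi, pr, po in zip(m, pre, post))
                if not any(leq(b, p) for b in basis | new):
                    new.add(p)
        if not new:
            break
        basis |= new
        # Keep only the minimal elements of the enlarged basis.
        basis = {m for m in basis
                 if not any(leq(b, m) and b != m for b in basis)}
    return any(leq(b, initial) for b in basis)

# Net with two places; t1 moves a token from p1 to p2: (1,0) covers (0,1).
print(backward_coverability((1, 0), (0, 1), [((1, 0), (0, 1))]))  # True
```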
230

From timed models to timed implementations

De Wulf, Martin 20 December 2006 (has links)
<p align="justify">Computer Science is currently facing a grand challenge :finding good design practices for embedded systems. Embedded systems are essentially computers interacting with some physical process. You could find one in a braking systems or in a nuclear power plant for example. They present several design difficulties :first they are reactive systems, interacting indefinitely with their environment. Second,they must satisfy real-time constraints specifying when they should respond, and not only how. Finally, their environment is often deeply continuous, presenting complex dynamics. The formal models of choice for specifying such systems are timed and hybrid automata for which model checking is pretty well studied.</p> <p><p align="justify">In a first part of this thesis, we study a complete design approach, including verification and code generation, for timed automata. We have to define a new semantics for timed automata, the AASAP semantics, that preserves the decidability properties for model checking and at the same time is implementable. Our notion of implementability is completely novel, and relies on the simulation of a semantics that is obviously implementable on a real platform. We wrote tools for the analysis and code generation and exemplify them on a case study about the well known Philips Audio Control Protocol.</p> <p><p align="justify">In a second part of this thesis, we study the problem of controller synthesis for an environment specified as a hybrid automaton. We give a new solution for discrete controllers having only an imperfect information about the state of the system. In the process, we defined a new algorithm, based on the monotonicity of the controllable predecessors operator, for efficiently finding a controller and we show some promising applications on a classical problem :the universality test for finite automata. / Doctorat en sciences, Spécialisation Informatique / info:eu-repo/semantics/nonPublished
