  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Control flow speculation for distributed architectures

Ranganathan, Nitya 21 October 2009 (has links)
As transistor counts, power dissipation, and wire delays increase, the microprocessor industry is transitioning from chips containing large monolithic processors to multi-core architectures. The granularity of the cores determines the mechanisms for branch prediction, instruction fetch and map, data supply, instruction execution, and completion. Accurate control flow prediction is essential for high performance processors with large instruction windows and high-bandwidth execution. This dissertation considers cores with very large granularity, such as TRIPS, as well as cores with extremely small granularity, such as TFlex, and explores control flow speculation issues in such processors. Both TRIPS and TFlex are distributed block-based architectures and require control speculation mechanisms that can work in a distributed environment while supporting efficient block-level prediction, misprediction detection, and recovery. This dissertation aims to provide efficient control flow prediction techniques for distributed block-based processors. First, we discuss simple exit predictors inspired by branch predictors and describe the design of the TRIPS prototype block predictor. Area and timing trade-offs in the predictor implementation are presented. We report the predictor misprediction rates from the prototype chip for the SPEC benchmark suite. Next, we look at the performance bottlenecks in the prototype predictor and present a detailed analysis of exit and target predictors using basic prediction components inspired by branch predictors. This study helps in understanding which types of predictors are effective for exit and target prediction. Using the results of our prediction analysis, we propose novel hardware techniques to improve the accuracy of block prediction.
To understand whether exit prediction is inherently more difficult than branch prediction, we measure the correlation among branches in basic blocks and hyperblocks and examine the loss in correlation due to hyperblock construction. Finally, we propose block predictors for TFlex, a fully distributed architecture that uses composable lightweight processors. We describe various possible designs for distributed block predictors and a classification scheme for such predictors. We present results for predictors from each of the design points for distributed prediction.
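The exit predictors discussed above map naturally onto familiar branch-predictor structures. A minimal sketch of a gshare-style exit predictor, in which a global history of recent exit identifiers is hashed with the block address, might look as follows; the class name, table sizes, and update policy are illustrative assumptions, not the TRIPS prototype design:

```python
# Sketch of a history-based exit predictor for block-based architectures.
# All names and table sizes are illustrative, not the TRIPS prototype.

class ExitPredictor:
    def __init__(self, table_bits=10, history_len=4, exit_bits=3):
        self.size = 1 << table_bits
        self.history_len = history_len
        self.exit_bits = exit_bits
        self.history = 0                  # global exit-history register
        self.table = [0] * self.size      # predicted exit id per entry

    def _index(self, block_pc):
        # Hash the block address with recent exit history (gshare-style).
        return (block_pc ^ self.history) % self.size

    def predict(self, block_pc):
        return self.table[self._index(block_pc)]

    def update(self, block_pc, actual_exit):
        self.table[self._index(block_pc)] = actual_exit
        # Shift the taken exit id into the bounded history register.
        mask = (1 << (self.history_len * self.exit_bits)) - 1
        self.history = ((self.history << self.exit_bits) | actual_exit) & mask
```

After a few iterations of a loop block that always leaves through the same exit, the history register stabilizes and the predictor learns that exit.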
2

Etude de la migration de tâches dans une architecture multi-tuile. Génération automatique d'une solution basée sur des agents / Study of task migration in a multi-tiled architecture. Automatic generation of an agent based solution

Elantably, Ashraf 16 December 2015 (has links)
Fully distributed-memory multiprocessors (MPSoCs) implemented as multi-tiled architectures are promising platforms for modern, sophisticated applications; however, the reliability of such systems remains an issue. A tile contains at least one processor, a private main memory, and associated peripherals, together with a communication device responsible for inter-tile communication. Transferring the execution of a task from one tile to another helps maintain acceptable reliability in such systems. We propose in this work an agent-based task migration technique that targets data-flow applications running on multi-tiled architectures. The technique relies on a middleware layer that makes it transparent to application programmers and eases its portability across different multi-tiled architectures. So that the solution scales to systems with more tiles, an automatic generation tool-chain produces the migration agents and provides them with the information they need to carry out migrations properly; this information is extracted automatically from the application task graphs and their mapping onto the system tiles. We show how agents are placed with applications and how this information is generated and linked with them. The tool-chain can generate code for ARM and x86 architectures, and the migration technique can be deployed on small operating systems that support neither an MMU nor dynamic loading of task code. We demonstrate that the technique is operational on an x86 hardware platform as well as on an ARM simulation platform. Experimental results show low overhead in both memory and performance: migrating a task in a typical small application with one predecessor and one successor incurs a performance overhead of 18.25%.
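The migration steps described above (freezing the task, transferring its state, and re-routing its predecessor and successor) can be sketched as a toy handshake. The tile and agent classes and the routing-table representation are invented for illustration; they are not the thesis's middleware API:

```python
# Illustrative sketch of an agent-driven task migration for a data-flow
# task with one predecessor and one successor. Class and message names
# are hypothetical, not the middleware described in the abstract.

class Tile:
    def __init__(self, name):
        self.name = name
        self.tasks = {}          # task name -> task state (dict)

class MigrationAgent:
    def migrate(self, task, src, dst, pred_routes, succ_routes):
        # 1. Freeze: stop scheduling the task on the source tile.
        state = src.tasks.pop(task)
        # 2. Transfer: copy the task state to the destination tile
        #    (on non-MMU systems without dynamic loading, task code is
        #    assumed pre-placed on every tile).
        dst.tasks[task] = dict(state)
        # 3. Reroute: the predecessor now sends tokens to the new tile,
        #    and the successor accepts tokens coming from it.
        pred_routes[task] = dst.name
        succ_routes[task] = dst.name
        # 4. Resume: the task continues from the transferred state.
        return dst.tasks[task]

tile_a, tile_b = Tile("A"), Tile("B")
tile_a.tasks["filter"] = {"iteration": 41, "fifo": [3, 5]}
routes_pred, routes_succ = {"filter": "A"}, {"filter": "A"}

agent = MigrationAgent()
agent.migrate("filter", tile_a, tile_b, routes_pred, routes_succ)
```

After the call, the task and its state live on tile B and both neighbors route their tokens accordingly.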
3

Linearizing and Distributing Engine Models for Control Design

Seitz, Timothy M. 13 September 2013 (has links)
No description available.
4

Architectures de diagnostic et de pronostic distribuées de systèmes techniques complexes de grande dimension / Distributed architectures for diagnosis and prognosis of large scale complex technical systems

Dievart, Mickaël 03 December 2010 (has links)
In this dissertation, various architectures for the control and monitoring of Large-Scale Complex Technical Systems (LSCTSs) are discussed. The problems of condition-based maintenance and health-status assessment are defined. A typology of diagnosis and prognosis is presented, leading to an assessment of the health status of LSCTSs. Studies of decentralized diagnosis are discussed, and the contributions of ICT and distributed technologies to diagnosis are presented. Distributed diagnosis and work related to this mode of deployment are then introduced. The limits of the centralized and decentralized diagnosis approaches are presented and compared with the benefits of distributed approaches. The information and knowledge that support diagnosis and prognosis, together with their modeling for exploitation, are described and formalized. A characterization of the different statuses a component can be in is proposed. The requirements for the monitoring layer of an LSCTS are described in order to implement the proposed diagnosis and prognosis principles, which are then specified as algorithms. Finally, a health-assessment method for LSCTSs is proposed. Several deployments can be considered to implement it, so a simulation platform was developed to evaluate the performance of centralized and distributed deployments; the platform is designed to behave as the monitoring layer of an LSCTS. A configurable use case is proposed for each of the two deployments and their performances are compared.
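The fusion of local diagnoses into a global health status can be illustrated with a small consistency-based sketch in which each local test either implicates or exonerates the components it covers. The test representation and the exoneration policy are simplifying assumptions, not the algorithms proposed in the dissertation:

```python
# Toy consistency-based health assessment: each subsystem runs local
# tests covering sets of components; failed tests implicate their
# components, passing tests exonerate theirs. Illustrative only.

def local_diagnosis(tests):
    """tests: list of (passed: bool, components: set) pairs."""
    suspects, exonerated = set(), set()
    for passed, comps in tests:
        (exonerated if passed else suspects).update(comps)
    return suspects - exonerated

def global_health(subsystem_reports):
    """Fuse local suspect sets into a global set of suspect components."""
    return set().union(*subsystem_reports)

# Subsystem 1: a test over {pump, valve} failed, but a valve-only test
# passed, so only the pump remains suspect.
sub1 = local_diagnosis([(False, {"pump", "valve"}), (True, {"valve"})])
# Subsystem 2: its only test passed.
sub2 = local_diagnosis([(True, {"sensor"})])
suspects = global_health([sub1, sub2])
```

The distributed deployments compared in the dissertation differ in where these local and global steps execute, not in this basic exoneration logic.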
5

Σχεδιασμός και ανάπτυξη διεπαφής πελάτη-εξυπηρετητή για υποστήριξη συλλογισμού σε κατανεμημένες εφαρμογές του σημαντικού ιστού / Design and development of a client-server interface to support reasoning in distributed Semantic Web applications

Αγγελόπουλος, Παναγιώτης 21 September 2010 (has links)
In the past few years, research on the evolution of the World Wide Web (WWW) has moved towards more intelligent and automated ways of discovering and extracting information. The Semantic Web is an extension of the current Web in which information is given well-defined meaning, enabling machines to better process and "understand" the data that, until now, they have merely presented. For the Semantic Web to function, computers must have access to organized collections of information, called ontologies. Ontologies provide a method of representing knowledge on the Semantic Web and can therefore be used by computing systems to carry out automated reasoning. To describe and represent Semantic Web ontologies in machine-readable languages, various initiatives have been proposed and are under development, the most important being the Web Ontology Language (OWL). This language now forms the basis for knowledge representation on the Semantic Web, owing to its promotion by the W3C and its growing adoption in related applications. The main tool for developing applications that manage OWL ontologies is the OWL API, a set of programming libraries and methods that provide a high-level interface for accessing and manipulating OWL ontologies. The theoretical background that guarantees the expressive and reasoning power of ontologies is provided by Description Logics, a well-defined, decidable subset of First-Order Logic that makes the representation and discovery of knowledge on the Semantic Web possible. Systems based on Description Logics are therefore well suited to discovering implicit information; such systems are called reasoners, with FaCT++ and Pellet as characteristic examples. This is why both the OWL API and reasoners are used by proposed models for next-generation (Web 3.0) Semantic Web applications to communicate with knowledge bases and submit "intelligent" queries to them. These models also propose the use of a 3-tier distributed architecture for Semantic Web applications. The aim of this diploma thesis is to design and implement a client-server interface to support reasoning in distributed Semantic Web applications. The interface consists of two parts. The first provides the files needed to run a reasoner on a remote machine (the server), so that this machine offers remote reasoning services. The second part (the client) contains files that complement and extend the OWL API libraries, allowing an application implemented with the OWL API to use the services offered by a remote reasoner. Our interface thus makes it possible for distributed architectures for Semantic Web applications to adopt the OWL API and reasoners.
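The client-server split described above can be sketched independently of the OWL API: the server wraps a reasoner and answers entailment queries over a wire format, while the client forwards calls it cannot answer locally. The toy taxonomy, the JSON message format, and the chain-following "reasoner" below are illustrative stand-ins for the real FaCT++/Pellet services:

```python
# Minimal sketch of a remote-reasoning interface. The message format
# and the simplistic subsumption check are hypothetical; a real server
# would delegate to FaCT++ or Pellet via the OWL API.
import json

class ReasonerServer:
    def __init__(self, subclass_axioms):
        self.parents = subclass_axioms        # child class -> parent class

    def is_subsumed(self, sub, sup):
        # Follow the declared subclass chain (a real reasoner does much more).
        while sub in self.parents:
            sub = self.parents[sub]
            if sub == sup:
                return True
        return False

    def handle(self, request_bytes):
        req = json.loads(request_bytes)
        answer = self.is_subsumed(req["sub"], req["sup"])
        return json.dumps({"entailed": answer}).encode()

class ReasonerClient:
    def __init__(self, server):
        self.server = server                  # stands in for a socket

    def is_subsumed(self, sub, sup):
        reply = self.server.handle(
            json.dumps({"sub": sub, "sup": sup}).encode())
        return json.loads(reply)["entailed"]

server = ReasonerServer({"Dog": "Mammal", "Mammal": "Animal"})
client = ReasonerClient(server)
```

In the thesis's setting, the `handle` side runs on the remote machine and the client side is linked into the OWL API application; only the serialized query crosses the network.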
6

Architectures innovantes de systèmes de commandes de vol / Innovative Architectures of Flight Control Systems

Sghairi Haouati, Manel 27 May 2010 (has links)
The electrical flight controls (Commandes de Vol Électriques, CDVE) of current civil aircraft were reached step by step, as the underlying technologies matured; new technologies, once mature, can be incorporated into aircraft. The next step is the use of fully digital communications and intelligent actuators. This thesis proposes new architectures, breaking with the state of the art, that redistribute the intelligent functions between the central avionics (flight control computers) and the remote avionics (the actuators' local electronics). Their advantage is that they require fewer hardware and software resources than conventional architectures while meeting the same safety and availability requirements, as well as the airlines' growing demands for operational reliability. The dependability and robustness of the proposed architectures have been validated with OCAS/AltaRica and Matlab/Simulink, respectively.
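Redundant flight control channels classically mask a single faulty command source by voting on the redundant commands. A minimal median-voter sketch (a generic illustration of the pattern, not the specific architecture proposed in the thesis):

```python
# Generic median voter over an odd number of redundant command values:
# one erroneous channel cannot pull the selected command away from the
# agreeing majority. Illustrative only.
def vote(commands):
    """Return the median of an odd number of redundant commands."""
    ordered = sorted(commands)
    return ordered[len(ordered) // 2]
```

With three channels, one runaway value is simply outvoted by the two consistent ones.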
7

Support des communications dans des architectures multicœurs par l’intermédiaire de mécanismes matériels et d’interfaces de programmation standardisées / Communication support in multi-core architectures through hardware mechanisms and standardized programming interfaces

Rosa, Thiago Raupp da 08 April 2016 (has links)
The application constraints driving the design of embedded systems constantly demand higher performance and power efficiency. To meet these constraints, current SoC platforms rely on replicating several processing cores while adding dedicated hardware accelerators to handle specific tasks. However, developing embedded applications has become a key challenge: application workloads continue to grow, while software technologies do not evolve as fast as hardware architectures, leaving a gap in the full system design. The increased programming complexity can be attributed to the lack of software standards that support heterogeneous architectures, frequently leading to custom solutions. On the other hand, adopting a standard software solution for embedded systems can induce significant performance and memory-usage overheads if it is not adapted to the architecture. This thesis therefore focuses on narrowing this gap by implementing hardware mechanisms co-designed with a standard programming interface for embedded systems. The main objectives are to increase programmability through the implementation of a standardized communication API (MCAPI), and to decrease the overheads imposed by the software implementation through the developed hardware mechanisms. The contributions of the thesis comprise an implementation of MCAPI for a generic multi-core platform and dedicated hardware mechanisms that improve the communication connection phase and the overall performance of the data-transfer phase. We demonstrate that the proposed mechanisms can be exploited by the software implementation without increasing software complexity. Furthermore, performance estimates obtained with a SystemC/TLM simulation model of the reference multi-core architecture show that the proposed mechanisms provide significant gains in latency (up to 97%), throughput (a 40x increase), and network traffic (up to 68%), while reducing processor workload, for both characterization test cases and real application benchmarks.
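The distinction between the connection phase and the data-transfer phase that the hardware mechanisms target can be sketched with MCAPI-inspired endpoints and connected channels. This Python sketch mimics MCAPI concepts for brevity; it is not the standard's C API:

```python
# MCAPI-inspired sketch: endpoints are created and connected once (the
# costly connection phase the hardware mechanisms accelerate), then the
# fixed channel carries repeated transfers (the latency-critical phase).
# Names mimic MCAPI concepts but are not the real C API.

class Endpoint:
    def __init__(self, node, port):
        self.node, self.port = node, port
        self.queue = []                   # incoming data buffer

class Channel:
    """Connected, unidirectional channel between two endpoints."""
    def __init__(self, tx, rx):
        self.tx, self.rx = tx, rx         # connection phase: route fixed here

    def send(self, data):
        self.rx.queue.append(data)        # data-transfer phase

    def recv(self):
        return self.rx.queue.pop(0)

# Connection phase (done once per channel):
tx = Endpoint(node=0, port=1)
rx = Endpoint(node=1, port=1)
chan = Channel(tx, rx)

# Data-transfer phase (repeated):
chan.send(b"sample")
```

Because the route is resolved at connection time, each subsequent transfer can be handed directly to the hardware, which is where the reported latency and throughput gains come from.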
8

Analyse Structurelle pour le Diagnostic des Systèmes Distribués / Structural analysis for the diagnosis of distributed systems

Perez zuniga, Carlos gustavo 21 August 2017 (has links)
The recent development of technological systems implies highly complex behaviors in today's systems. One answer to this increased complexity is to view such systems as a multitude of heterogeneous subsystems and to develop distributed techniques to control and manage them. This raises a number of problems. First, as the size and number of components increase, so does the number of fault occurrences that may drive the system into critical failures. Fault detection and isolation (FDI), maintenance, and repair have consequently become a predominant part of everyday operational tasks, and they drastically impact the total cost of final products. This thesis focuses on fault detection and isolation. Among the different methods for generating diagnosis tests that take advantage of analytical redundancy, it adopts the parity-space approach based on analytical redundancy relations (ARRs). Given a model of the system as a set of differential equations, ARRs are relations obtained from the model by eliminating non-measured variables. This can be performed in an analytical framework using elimination theory, but another way is to use structural analysis. Structural analysis is based on a structural abstraction of the model that retains only which variables are involved in which equations. Despite its apparent simplicity, this abstraction provides a set of powerful tools, relying on graph theory, to analyze and infer information about the system; interestingly, it applies indifferently to linear and nonlinear systems. The goal of this thesis is to develop effective techniques based on structural analysis for the diagnosis of distributed continuous systems. In this framework, the system is decomposed into a set of subsystems according to functional, geographical, or privacy constraints. The thesis is organized in two parts: (1) highlighting, from the structural models obtained at the subsystem level, the redundancies that can be used to generate diagnosis tests relevant at the global system level; (2) formulating and solving the optimization problem linked to choosing a subset of diagnosis tests at the subsystem level that achieves maximum diagnosability for the global system. The first part builds on the concept of a Fault-Driven Minimal Structurally Overdetermined (FMSO) set, introduced in the thesis: a subset of model equations with minimal redundancy from which an ARR sensitive to a given set of faults can be obtained. Two solutions for generating FMSO sets for the global system are presented: a decentralized framework with supervisors organized in a hierarchy, and a totally distributed framework. Both are based on properties, derived in the thesis, that relate the FMSO sets of the subsystems to those of the global system. The second part casts the optimization problem as a heuristic search and proposes three solutions based on an iterative A* algorithm combined with a function that evaluates whether a global FMSO set can be obtained from the selected local FMSO sets. The concepts introduced in the thesis and the results are applied to two industrial case studies: a desalination plant, and an attitude determination and control system for a low-Earth-orbit satellite.
9

Um modelo de autorização contextual para o controle de acesso ao prontuário eletrônico do paciente em ambientes abertos e distribuídos. / A contextual authorization model for access control of electronic patient record in open distributed environments.

Motta, Gustavo Henrique Matos Bezerra 05 February 2004 (has links)
Recent advances in computing and communication technologies have made the information in the electronic patient record (EPR) readily accessible. The resulting ease of disseminating clinical information raises concerns about patients' privacy and the confidentiality of their data. Legal regulations mandate the confidentiality of EPR contents: no one may access a patient's record without the patient's prior authorization, except when access is necessary to provide care on the patient's behalf. This work proposes MACA, a contextual authorization model for role-based access control (RBAC) that addresses the access restriction requirements of the EPR in open and distributed environments. RBAC regulates users' access to the EPR based on the functions (roles) they perform in an organization. A contextual authorization uses environmental information available at access time, such as the user/patient relationship, to decide whether a user has both the right and the need to access an EPR resource. This gives MACA the flexibility and expressive power to establish access policies for the EPR, and administrative policies for RBAC, that adapt to the environmental and cultural diversity of healthcare organizations. MACA also allows EPR components to use RBAC transparently to the end user, making it easier to use than other RBAC models. The architecture into which the MACA implementation was integrated adopts the LDAP (Lightweight Directory Access Protocol) directory service, the Java programming language, and the CORBA Security Service and Resource Access Decision Facility standards. With these open, distributed standards, heterogeneous EPR components can request user authentication and access authorization services in a unified and coherent way across multiple platforms.

The MACA implementation also has the advantage of being free software, of relying on software components with no licensing costs, and of delivering good performance under the estimated access demand. Finally, the routine use of MACA to control access to the EPR at InCor-HC.FMUSP by about 2000 users demonstrates the feasibility of the model, of its implementation, and of its practical application in real cases.
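The core idea of the abstract, that a request is granted only when a role-based permission holds *and* an environmental condition evaluated at access time also holds, can be sketched in a few lines. This is an illustrative sketch only, not MACA's actual API; the role names, permission tables, and context keys (`treating_patient`, `emergency`) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    roles: set        # roles the user holds in the organization
    resource: str
    operation: str
    context: dict     # environmental information available at access time

# Role permissions: which (resource, operation) pairs each role may use.
# Hypothetical roles and resources, for illustration only.
ROLE_PERMISSIONS = {
    "physician": {("epr", "read"), ("epr", "write")},
    "nurse": {("epr", "read")},
}

# Contextual rules: even when a role permits the operation, an
# environmental condition must also hold at access time.
CONTEXT_RULES = {
    ("epr", "read"): lambda ctx: ctx.get("treating_patient", False)
                                 or ctx.get("emergency", False),
    ("epr", "write"): lambda ctx: ctx.get("treating_patient", False),
}

def authorize(req: AccessRequest) -> bool:
    """Grant access only if some role permits the operation AND the
    contextual rule for that operation accepts the current context."""
    key = (req.resource, req.operation)
    role_ok = any(key in ROLE_PERMISSIONS.get(r, set()) for r in req.roles)
    if not role_ok:
        return False
    rule = CONTEXT_RULES.get(key, lambda ctx: True)  # no rule: role suffices
    return rule(req.context)
```

For example, a nurse treating the patient may read the record but not write to it, and a physician with no treatment relationship and no emergency is denied even though the role alone would permit the read.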