321 |
Managing Changes to Service Oriented Enterprises
Akram, Mohammad Salman, 07 July 2005 (has links)
In this thesis, we present a framework for managing changes in service oriented enterprises (SOEs). A service oriented enterprise outsources and composes its functionality from third-party Web service providers. We focus on changes initiated or triggered by these member Web services. We present a taxonomy of changes that occur in service oriented enterprises. We use a combination of several types of Petri nets to model the triggering changes and ensuing reactive changes. The techniques presented in our thesis are implemented in WebBIS, a prototype for composing and managing e-business Web services. Finally, we conduct an extensive simulation study to prove the feasibility of the proposed techniques. / Master of Science
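The kind of triggering/reactive change handling described above can be sketched as a minimal place/transition net; the net and all names below are illustrative assumptions, not WebBIS itself:

```python
# Minimal place/transition Petri net (hypothetical sketch, not WebBIS).
# A triggering change (a member service going down) enables a reactive
# change (substituting an equivalent provider), restoring the composition.

class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)          # place -> token count
        self.transitions = {}                 # name -> (inputs, outputs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= n for p, n in inputs.items())

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"transition {name} not enabled")
        inputs, outputs = self.transitions[name]
        for p, n in inputs.items():
            self.marking[p] -= n
        for p, n in outputs.items():
            self.marking[p] = self.marking.get(p, 0) + n

# SOE with one outsourced service; "provider_ok" models the member service.
net = PetriNet({"provider_ok": 1, "backup_available": 1})
# Triggering change: the provider becomes unavailable.
net.add_transition("provider_fails", {"provider_ok": 1}, {"provider_down": 1})
# Reactive change: the SOE substitutes the backup provider.
net.add_transition("substitute",
                   {"provider_down": 1, "backup_available": 1},
                   {"provider_ok": 1})

net.fire("provider_fails")
assert net.marking["provider_down"] == 1
net.fire("substitute")                        # reactive change restores service
assert net.marking["provider_ok"] == 1 and net.marking["provider_down"] == 0
```

The token game makes the causal chain explicit: the reactive transition is enabled only by the marking that the triggering change produces.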
|
322 |
Spiking neural P systems: matrix representation and formal verification
Gheorghe, Marian, Lefticaru, Raluca, Konur, Savas, Niculescu, I.M., Adorna, H.N., 28 April 2021 (has links)
Structural and behavioural properties of models are very important in the development of complex systems and applications. In this paper, we investigate such properties for some classes of SN P systems. First, a class of SN P systems associated with a set of routing problems is investigated through their matrix representation, which makes it possible to establish connections among some of these problems. Secondly, the behavioural properties of these SN P systems are formally verified through a natural and direct mapping of these models into kP systems, which are equipped with adequate formal verification methods and tools. Several examples are used to demonstrate the effectiveness of the verification approach. / EPSRC research grant EP/R043787/1; DOST-ERDT research grants; Semirara Mining Corp; UPD-OVCRD
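The matrix representation mentioned above updates a configuration vector by adding the product of a spiking vector and a transition matrix whose rows record each rule's net effect on every neuron. A toy two-neuron sketch (an illustrative system, not one of the paper's routing examples):

```python
import numpy as np

# Toy SN P system with two neurons and one rule per neuron (r1 in n1, r2 in n2),
# each rule a -> a: consume one spike locally, send one spike to the other neuron.
# Rows of M correspond to rules, columns to neurons; M[i][j] is the net change
# rule i causes to neuron j's spike count when it fires.
M = np.array([[-1,  1],    # r1: n1 loses a spike, n2 gains one
              [ 1, -1]])   # r2: n2 loses a spike, n1 gains one

C = np.array([1, 0])       # initial configuration: one spike in neuron 1

def step(C):
    # Spiking vector: rule i fires iff its neuron holds at least one spike.
    s = np.array([1 if C[0] >= 1 else 0, 1 if C[1] >= 1 else 0])
    return C + s @ M       # C_{k+1} = C_k + s_k . M

C1 = step(C)               # the spike moves to neuron 2
C2 = step(C1)              # and back again
assert list(C1) == [0, 1]
assert list(C2) == [1, 0]
```

Because each step is a single matrix update, questions about reachable configurations reduce to linear-algebraic reasoning, which is what makes this representation convenient for the routing problems studied.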
|
323 |
A Verification Framework for Component Based Modeling and Simulation: “Putting the pieces together”
Mahmood, Imran, January 2013 (has links)
The discipline of component-based modeling and simulation offers promising gains, including reductions in development cost, time, and system complexity. This paradigm promotes the use and reuse of modular components and is well suited to the effective development of complex simulations. It is, however, confronted by a series of research challenges when the methodology is put into practice. One such important issue is composability verification. In modeling and simulation (M&S), composability is the capability to select and assemble components in various combinations to satisfy specific user requirements; to ensure the correctness of a composed model, it is therefore verified with respect to its requirements specification. Various approaches and component modeling frameworks support composability, but in our observation most possess weak or no built-in support for composability verification. One such framework is the Base Object Model (BOM), which fundamentally has satisfactory potential for effective model composability and reuse. However, it falls short of the required semantics, necessary modeling characteristics, and built-in evaluation techniques that are essential for modeling complex system behavior and for reasoning about the validity of composability at different levels. In this thesis, a comprehensive verification framework is proposed to address important issues in composability verification, and a verification process is suggested to verify the composability of different kinds of system models, such as reactive, real-time, and probabilistic systems. Under the assumption that all these systems are concurrent in nature, with the composed components interacting with each other simultaneously, the requirements for extensive structural and behavioral analysis techniques become increasingly challenging.
The proposed verification framework provides methods, techniques, and tool support for verifying composability at its different levels. These levels are defined as the foundations of consistent model composability. Each level is discussed in detail, and an approach is presented to verify composability at that level. In particular, we focus on the dynamic-semantic composability level, due to its significance for overall composability correctness and to the difficulty it poses in the process. To verify composability at this level, we investigate the application of three different approaches: (i) Petri-net-based algebraic analysis, (ii) Colored Petri Net (CPN) based state-space analysis, and (iii) Communicating Sequential Processes based model checking. The three approaches attack the problem of verifying dynamic-semantic composability in different ways, but they share the same aim: to confirm the correctness of a composed model with respect to its requirements specification. Besides the operative integration of these approaches in our framework, we also contribute improvements to each approach for effective applicability in composability verification, such as algorithms for automating Petri net algebraic computations, a state-space reduction technique for CPN-based state-space analysis, and function libraries that perform verification tasks and ease the modeler's work during composability verification. We also provide detailed examples of using each approach with different models to explain the verification process and their functionality. Lastly, we compare these approaches and suggest guidelines for choosing the right one based on the nature of the model and the available information.
With the right choice of approach, and following the guidelines of our component-based M&S life cycle, a modeler can easily construct and verify BOM-based composed models with respect to their requirements specifications. / Overseas Scholarship for PhD in Selected Studies, Phase II, Batch I; Higher Education Commission of Pakistan.
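One of the Petri net algebraic computations such a framework automates can be illustrated with P-invariants: a place weighting x is an invariant iff x^T C = 0 for the incidence matrix C, so the weighted token count is conserved in every reachable marking. A toy sketch (net and names assumed for illustration, not taken from the thesis):

```python
import numpy as np

# Toy net: a component alternates between "idle" and "busy".
# Incidence matrix C: rows = places (idle, busy), columns = transitions
# (start, finish); C[p][t] = net tokens added to place p when t fires.
C = np.array([[-1,  1],   # idle: "start" consumes, "finish" produces
              [ 1, -1]])  # busy: "start" produces, "finish" consumes

x = np.array([1, 1])      # candidate P-invariant: weight every place by 1

# Algebraic check: x is a P-invariant iff x^T . C = 0 (zero for each transition).
assert (x @ C == 0).all()

# Consequence: the weighted token count is the same in every reachable marking.
m0 = np.array([1, 0])                 # initially idle
m1 = m0 + C[:, 0]                     # fire "start"
m2 = m1 + C[:, 1]                     # fire "finish"
for m in (m0, m1, m2):
    assert x @ m == x @ m0            # invariant sum stays 1
```

An invariant like this one certifies, purely algebraically and without building the state space, that the component is always in exactly one of its two states.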
|
324 |
Hétérogénéité des neutrophiles dans l’asthme équin (Neutrophil heterogeneity in equine asthma)
Herteman, Nicolas, 08 1900 (has links)
Low-density granulocytes (LDGs) are a subset of neutrophils first described in the blood of patients with pathological conditions such as systemic lupus erythematosus or psoriasis. However, several studies have also reported the presence of these cells in the blood of healthy individuals. Whether the characteristics of LDGs, especially their enhanced pro-inflammatory profile, are specific to this subset of neutrophils rather than related to disease states is unknown, and their biogenesis also remains poorly understood.
Thus, we sought to compare the properties of LDGs to those of autologous normal-density neutrophils (NDNs), in both health and disease. We studied 8 horses with severe equine asthma and 11 healthy animals. Neutrophil morphology was studied using optical microscopy, and the myeloperoxidase content and N-formylmethionine-leucyl-phenylalanine receptors (fMLP-R) were evaluated using flow cytometry and immunofluorescence, respectively. Confocal microscopy was used to determine the cells' capacity to release neutrophil extracellular traps (NETs), both spontaneously and upon stimulation with phorbol-12-myristate-13-acetate (PMA).
The number of LDGs was increased in the blood of asthmatic horses during exacerbation of the disease. LDGs were smaller and contained more fMLP-R than NDNs, but myeloperoxidase content was similar in both populations of neutrophils. LDGs also had an increased capacity to produce NETs and were more sensitive to activating stimuli.
These characteristics were similar in both healthy and diseased horses, suggesting that these are intrinsic properties of LDGs. Furthermore, these results suggest that LDGs represent a population of primed and predominantly mature cells. Our study is the first to characterize LDGs in health, and to compare their characteristics with those of animals with a naturally occurring disease.
|
325 |
Preserving Data Integrity in Distributed Systems
Triebel, Marvin, 30 November 2018 (has links)
Information systems process data that is logically and physically distributed over many locations. Data entities at different locations may be in a specific relationship. For example, a data entity at one location may contain a reference to a data entity at a different location, or a data entity may contain critical information such as a password. The semantics of data entities induce data integrity in the form of requirements: for example, no references should be dangling, and critical information should be available at only one location. Data integrity discriminates between correct and incorrect data distributions.
A distributed system progresses in steps, which may occur concurrently. In each step, data is manipulated. Each data manipulation is performed locally and affects a bounded number of data entities. A distributed system preserves data integrity if each step of the system yields a data distribution that satisfies the requirements of data integrity. Preservation of data integrity is a necessary condition for the correctness of a system. Analysis and design are challenging, as distributed systems lack global control, employ different technologies, and data may accumulate unboundedly.
In this thesis, we study formal methods to model and analyze distributed data-aware systems. As a result, we provide a technology-independent framework for design-time analysis. To this end, we use algebraic Petri nets. We show that there exists a bound for the conditions of each step of a distributed system if and only if the steps can be described by a finite set of transitions of an algebraic Petri net. We use algebraic equations and inequalities to specify data integrity. We show that preservation of data integrity is undecidable in case we consider all reachable steps. We show that preservation of data integrity is decidable in case we also include unreachable steps. We show the latter by showing computability of a non-preserving step as a witness.
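The integrity requirements and steps discussed above can be sketched directly; the dictionary encoding and all names below are illustrative assumptions, not the algebraic Petri net formalization of the thesis:

```python
# Sketch of checking that a step preserves data integrity (illustrative names;
# the thesis works with algebraic Petri nets, not Python dictionaries).
# A data distribution maps node -> {entity: referenced entity or None}.

def integrity(dist):
    """No dangling references: every referenced entity exists on some node."""
    everything = {e for entities in dist.values() for e in entities}
    return all(ref is None or ref in everything
               for entities in dist.values()
               for ref in entities.values())

def delete_entity(dist, node, entity):
    """A step: delete an entity locally (may break integrity globally)."""
    new = {n: dict(es) for n, es in dist.items()}
    del new[node][entity]
    return new

dist = {"n1": {"order1": "customer7"}, "n2": {"customer7": None}}
assert integrity(dist)

# Deleting the referenced entity yields an invalid distribution: the step
# is local to n2, yet it violates a global requirement.
bad = delete_entity(dist, "n2", "customer7")
assert not integrity(bad)             # order1 now holds a dangling reference
```

The example shows why preservation is hard to establish: each step touches only a bounded, local portion of the data, while the integrity requirement quantifies over the whole distribution.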
|
326 |
Metodologia para detecção e tratamento de falhas em sistemas de manufatura através de Rede de Petri / Methodology for detection and treatment of failures in manufacturing systems applying Petri nets
Luis Alberto Martínez Riascos, 07 June 2002 (has links)
Failures are events that, by their very nature, cannot be completely eliminated from a real manufacturing system.
However, most research in this area considers only the description and optimization of normal processes, or treats failure handling in isolation. This research contributes a methodology for modeling and analyzing manufacturing systems that covers normal processes, failure detection, and failure treatment together; the hypothesis is that an approach considering all of these processes is fundamental to improving the flexibility and autonomy of the system. Such systems can be viewed as discrete event dynamic systems (DEDS), and among the techniques for representing them, Petri nets (PN) stand out as a uniform modeling and analysis technique that allows different characteristics of a system to be studied with the same model. This research introduces a Petri-net-based methodology that, besides modeling and analyzing the normal processes (according to the functional specifications), supports the detection and treatment of failures in manufacturing systems in a hierarchical and modular way. The modular structure integrates three types of processes: normal, failure detection, and failure treatment. The hierarchical structure allows a system to be modeled at hierarchical levels (such as factory, manufacturing cell, and equipment) through top-down and bottom-up approaches, using supervisors distributed in the shop-floor equipment. Case studies with these characteristics are considered, and analytical and simulation analyses are performed on the developed models to validate the proposed methodology.
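The three-module structure (normal process, failure detection, failure treatment) can be sketched as a single net with shared places; all names are illustrative, not the thesis's case studies:

```python
# Sketch of the three-module structure (normal process, failure detection,
# failure treatment) as one Petri net; the machine names are illustrative.

marking = {"part_waiting": 1, "machine_ok": 1}
transitions = {
    # normal-process module
    "process_part": ({"part_waiting": 1, "machine_ok": 1},
                     {"part_done": 1, "machine_ok": 1}),
    # failure-detection module: a sensor signal produces "fault_detected"
    "detect_fault": ({"machine_ok": 1, "sensor_alarm": 1},
                     {"fault_detected": 1}),
    # failure-treatment module: the supervisor repairs and resumes
    "treat_fault":  ({"fault_detected": 1}, {"machine_ok": 1}),
}

def enabled(t):
    ins, _ = transitions[t]
    return all(marking.get(p, 0) >= n for p, n in ins.items())

def fire(t):
    ins, outs = transitions[t]
    for p, n in ins.items():
        marking[p] -= n
    for p, n in outs.items():
        marking[p] = marking.get(p, 0) + n

marking["sensor_alarm"] = 1           # a failure event occurs
assert enabled("detect_fault")
fire("detect_fault")                  # detection module reacts
fire("treat_fault")                   # treatment module restores the machine
assert enabled("process_part")        # normal processing can continue
fire("process_part")
assert marking["part_done"] == 1
```

Because the three modules share only the places `machine_ok` and `fault_detected`, each can be modeled and analyzed separately and then composed, which is the modularity the methodology relies on.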
|
327 |
Modelagem de arquiteturas reconfiguráveis com espaços de Chu (Modeling reconfigurable architectures with Chu spaces)
Araújo, Camila de, 28 July 2007 (has links)
Reconfigurable architectures have emerged as an alternative to ASICs and general-purpose processors (GPPs), keeping a balance between flexibility and performance. This work presents a proposal for modeling reconfigurable architectures with Chu spaces, describing the main subjects related to this theme.
The proposed solution consists of a modeling approach that uses a generalization of Chu spaces, called Chu nets, to model the configurations of a reconfigurable architecture. To validate the models, three algorithms were developed and implemented: composing configurable logic blocks, and detecting the controllability and observability vectors in applications for reconfigurable architectures modeled with Chu nets.
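A Chu space itself, the structure the thesis generalizes, is small enough to sketch; the example below shows only the basic representation and its dual, with illustrative names (the Chu nets developed in the thesis are not reproduced here):

```python
# A Chu space over 2 is a triple (A, r, X): a set of points A, a set of
# states X, and a satisfaction matrix r : A x X -> {0, 1}.  The names below
# are illustrative assumptions about how configurations might be encoded.

points = ["cell1", "cell2"]           # e.g. configurable logic blocks
states = ["cfgA", "cfgB", "cfgC"]     # e.g. architecture configurations
r = [[1, 0, 1],                       # r[i][j]: point i is active in state j
     [0, 1, 1]]

def dual(points, states, r):
    """The dual Chu space swaps points with states and transposes r."""
    rT = [[r[i][j] for i in range(len(points))] for j in range(len(states))]
    return states, points, rT

dp, ds, dr = dual(points, states, r)
assert (dp, ds) == (states, points)
assert dr == [[1, 0], [0, 1], [1, 1]]

# Distinct rows mean the states can tell every pair of points apart,
# a basic separability check on the model.
assert len({tuple(row) for row in r}) == len(points)
```

The matrix view is what makes algorithmic checks such as controllability and observability detection natural to phrase over these models.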
|
328 |
Strategische Interaktion realer Agenten (Strategic Interaction of Real Agents)
Tagiew, Rustam, 17 March 2011 (links) (PDF)
To understand human social, administrative, and economic behavior, which can be regarded as games or strategic interactions, purely analytical methods do not suffice. It is necessary to gather data on human strategic behavior; based on such data, this behavior can be modeled, simulated, and predicted. The theoretical part of the objective is achieved through a practice-oriented conceptualization of strategic interaction between real agents (humans and machines) and a mutual integration of concepts from game theory and multi-agent systems that goes beyond related work. The practical part is the design of a generally usable system that can run strategic interactions between real agents with maximum scientific benefit; the actual implementation is one of the results of this work. Similar existing systems are the GDL server (for machines) [Genesereth et al., 2005] and z-Tree (for humans) [Fischbacher, 2007].
The work is divided into three areas: (1) development of languages for describing a game, (2) a software system based on these languages, and (3) an offline analysis of data gathered, among other means, with the system, as a contribution to behavior-description facilities. The innovation of this work lies not only in combining these areas but also in advancing each of them individually. In the field of game description languages, two languages are proposed, PNSI and SIDL, both of which can define games of imperfect information in discrete time; this is an improvement over previous languages such as Gala and GDL. In particular, the Petri-net-based language PNSI can be used both by game servers and by game-theoretic algorithms such as those of GAMBIT. The developed system FRAMASI is based on JADE [Bellifemine et al., 2001] and improves on previous client-server solutions through the advantages of multi-agent systems. An experiment was conducted with the system according to the standards of experimental game theory, demonstrating its practicality. The experiment aimed to provide data on human unpredictability and on the ability to predict others; variants of rock-paper-scissors (Roshambo) were used. The data from this experiment, and from an experiment by an external group with similar motivation, were analyzed using data mining. The behavioral regularities reported in the literature were confirmed, and further regularities were discovered.
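The kind of predictability the Roshambo experiment probes can be illustrated with the simplest possible opponent model, a frequency counter; this is an illustrative sketch, far simpler than the thesis's data-mining analysis:

```python
from collections import Counter

# Frequency-based opponent model for rock-paper-scissors: predict the
# opponent's most frequent past move and play what beats it.  A human who
# is not perfectly unpredictable is exploitable by even this crude model.

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def counter_move(history):
    """Choose our move given the opponent's move history."""
    if not history:
        return "rock"                          # arbitrary opening move
    predicted = Counter(history).most_common(1)[0][0]
    return BEATS[predicted]

# An opponent who overuses rock is exploited once the model has data:
assert counter_move(["rock", "rock", "scissors"]) == "paper"
assert counter_move([]) == "rock"
```

Richer models (n-gram patterns, response-to-loss patterns) follow the same scheme: mine regularities from the history, then best-respond to the prediction.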
|
329 |
AVALIAÇÃO DE AÇÕES PREVENTIVAS DE RISCOS UTILIZANDO TEORIA DE DECISÃO E REDES DE PETRI COLORIDAS / EVALUATION OF RISK-PREVENTIVE ACTIONS USING DECISION THEORY AND COLORED PETRI NETS
Biasoli, Daniel, 18 April 2012 (links)
Risk management in software projects involves defining actions to prevent the risks identified for a project, in order to minimize or eliminate their effects. Defining preventive actions, and especially assessing their efficacy in eliminating a risk, is not a trivial task. The objective of this research is to identify and propose a method for evaluating preventive actions that mitigate or eliminate risks in software projects. The evaluation rests on a quantitative analysis guided by decision theory and modeled and simulated by means of colored Petri nets. The choice of topic reflects the importance of predicting the impact and efficacy of preventive actions in software projects, anticipating their possible outcomes and enhancing their use. The research comprised three distinct, mutually complementary stages carried out in different periods: a) defining an approach for modeling and simulating processes that is widely accepted by the scientific community; b) identifying a theoretical basis capable of establishing a criterion to support decision-making and thus to evaluate the impact of risk-preventive actions in software development projects; c) evaluating the simulation results based on the models of preventive actions, using the previously established theoretical basis. The study is exploratory, analytical, and descriptive, combined with documentary analysis of bibliographic sources, using documents and information drawn from the literature. The proposed method introduces a formal step into the process of evaluating risk-preventive actions. Simulation with colored Petri nets, aided by decision theory through Bayes' theorem, made the processes more understandable, enabled more effective participation by the experts involved, and provided a formal mathematical representation coupled with analysis mechanisms for inspecting risks in the adapted processes.
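The decision-support step can be illustrated with a plain Bayes update; all probabilities below are made-up illustrations, not values from the thesis's simulations:

```python
# Bayes' theorem as used to support the decision step (numbers are invented
# for illustration; the thesis derives its probabilities from simulated
# colored Petri net runs).

def posterior(prior, likelihood, false_alarm):
    """P(risk | evidence) from P(risk), P(evidence | risk), P(evidence | no risk)."""
    evidence = likelihood * prior + false_alarm * (1.0 - prior)
    return likelihood * prior / evidence

# Prior belief that a given risk materializes in the project:
p_risk = 0.30
# After observing a warning sign that a preventive action monitors:
p = posterior(p_risk, likelihood=0.80, false_alarm=0.20)
assert round(p, 4) == 0.6316          # the evidence sharply raises the estimate
```

Comparing such posteriors with and without a candidate preventive action in place gives the quantitative criterion on which the evaluation method is built.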
|