11 |
Fluxo de dados em redes de Petri coloridas e em grafos orientados a atores / Dataflow in colored Petri nets and in actors-oriented workflow graphs
Grace Anne Pontes Borges 11 September 2008 (has links)
Há três décadas, os sistemas de informação corporativos eram projetados para apoiar a execução de tarefas pontuais. Atualmente, esses sistemas também precisam gerenciar os fluxos de trabalho (workflows) e processos de negócio de uma organização. Em comunidades científicas de físicos, astrônomos, biólogos, geólogos, entre outras, seus sistemas de informações distinguem-se dos existentes em ambientes corporativos por: tarefas repetitivas (como re-execução de um mesmo experimento); processamento de dados brutos em resultados adequados para publicação; e controle de condução de experimentos em diferentes ambientes de hardware e software. As diferentes características dos dois ambientes, corporativo e científico, propiciam que ferramentas e formalismos existentes ou priorizem o controle de fluxo de tarefas, ou o controle de fluxo de dados. Entretanto, há situações em que é preciso atender simultaneamente ao controle de transferência de dados e ao controle de fluxo de tarefas. Este trabalho visa caracterizar e delimitar o controle e representação do fluxo de dados em processos de negócios e workflows científicos. Para isso, são comparadas as ferramentas CPN Tools e KEPLER, que estão fundamentadas em dois formalismos: redes de Petri coloridas e grafos de workflow orientados a atores, respectivamente. A comparação é feita por meio de implementações de casos práticos, usando os padrões de controle de dados como base de comparação entre as ferramentas. / Three decades ago, business information systems were designed to support the execution of individual tasks. Today's information systems also need to support organizational workflows and business processes. In scientific communities composed of physicists, astronomers, biologists, geologists, among others, information systems have characteristics that differ from those found in business environments, such as: repetitive procedures (like the re-execution of an experiment); the transformation of raw data into publishable results; and the coordination of experiments across several different software and hardware environments. These differing characteristics of business and scientific environments lead to tools and formalisms that emphasize either control flow or dataflow. However, there are situations in which data transfer and control flow must be handled simultaneously. This work aims to characterize and delimit the representation and control of dataflow in business processes and scientific workflows. To achieve this, two tools are compared: CPN Tools and KEPLER, which are based on the formalisms of colored Petri nets and actor-oriented workflow graphs, respectively. The comparison is carried out through the implementation of practical cases, using the dataflow patterns as the basis of comparison between the tools.
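To make the dataflow question concrete, here is a minimal sketch in Python (not in CPN ML, CPN Tools or KEPLER) of a coloured token carrying data through a single transition; the place names, token colours, guard and arc expression are illustrative assumptions, not an example taken from the dissertation.

```python
from collections import Counter

# A minimal coloured-Petri-net "data flow" sketch: tokens carry values (colours),
# and a transition consumes/produces them via a guard and arc expressions.
places = {
    "raw_data": Counter({("exp1", 3.2): 1, ("exp2", 5.7): 1}),  # colour: (experiment id, value)
    "published": Counter(),
}

def guard(token):
    # fire only for measurements above an (arbitrary) threshold
    _, value = token
    return value > 4.0

def fire_process(places):
    """Fire the 'process' transition once, if enabled, moving one token."""
    for token in list(places["raw_data"]):
        if places["raw_data"][token] > 0 and guard(token):
            places["raw_data"][token] -= 1
            exp_id, value = token
            # output arc expression: transform the raw value into a "result" colour
            places["published"][(exp_id, round(value * 2, 1))] += 1
            return True
    return False  # transition not enabled

while fire_process(places):
    pass
print(dict(places["published"]))  # {('exp2', 11.4): 1}
```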
|
12 |
La vérification de patrons de workflow métier basés sur les flux de contrôle : une approche utilisant les systèmes à base de connaissances / Control flow-based business workflow templates checking : an approach using the knowledge-based systems
Nguyen, Thi Hoa Hue 23 June 2015 (has links)
Cette thèse traite le problème de la modélisation des patrons de workflow sémantiquement riches et propose un processus pour développer des patrons de workflow. L'objectif est de transformer un processus métier en un patron de workflow métier basé sur les flux de contrôle qui garantit la vérification syntaxique et sémantique. Les défis majeurs sont : (i) de définir un formalisme permettant de représenter les processus métiers; (ii) d'établir des mécanismes de contrôle automatiques pour assurer la conformité des patrons de workflow métier basés sur un modèle formel et un ensemble de contraintes sémantiques; et (iii) d'organiser la base de patrons de workflow métier pour le développement de patrons de workflow. Nous proposons un formalisme qui combine les flux de contrôle (basés sur les Réseaux de Petri Colorés (CPNs)) avec des contraintes sémantiques pour représenter les processus métiers. L'avantage de ce formalisme est qu'il permet de vérifier non seulement la conformité syntaxique basée sur le modèle de CPNs mais aussi la conformité sémantique basée sur les technologies du Web sémantique. Nous commençons par une phase de conception d'une ontologie OWL appelée l'ontologie CPN pour représenter les concepts de patrons de workflow métier basés sur CPN. La phase de conception est suivie par une étude approfondie des propriétés de ces patrons pour les transformer en un ensemble d'axiomes pour l'ontologie. Ainsi, dans ce formalisme, un processus métier est syntaxiquement transformé en une instance de l'ontologie. / This thesis tackles the problem of modelling semantically rich business workflow templates and proposes a process for developing workflow templates. The objective of the thesis is to transform a business process into a control flow-based business workflow template that guarantees syntactic and semantic validity. The main challenges are: (i) to define a formalism for representing business processes; (ii) to establish automatic control mechanisms to ensure the correctness of a business workflow template based on a formal model and a set of semantic constraints; and (iii) to organize the knowledge base of workflow templates for a workflow development process. We propose a formalism which combines control flow (based on Coloured Petri Nets (CPNs)) with semantic constraints to represent business processes. The advantage of this formalism is that it allows not only syntactic checks based on the model of CPNs, but also semantic checks based on Semantic Web technologies. We start by designing an OWL ontology called the CPN ontology to represent the concepts of CPN-based business workflow templates. The design phase is followed by a thorough study of the properties of these templates in order to transform them into a set of axioms for the CPN ontology. In this formalism, a business process is syntactically transformed into an instance of the CPN ontology. Syntactic checking of a business process therefore reduces to verification by inference, over the concepts and axioms of the CPN ontology, on the corresponding instance.
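To illustrate the general idea of encoding a CPN-based workflow template as ontology individuals, the following rdflib sketch builds a tiny RDF graph; the namespace, class and property names (Place, Transition, hasInputPlace) are hypothetical stand-ins and do not reproduce the thesis's actual CPN ontology or its OWL axioms.

```python
from rdflib import Graph, Namespace, RDF, RDFS, Literal

# Hypothetical namespace standing in for the thesis's CPN ontology.
CPN = Namespace("http://example.org/cpn-ontology#")

g = Graph()
g.bind("cpn", CPN)

# Minimal schema fragment: transitions are linked to places by input arcs.
g.add((CPN.Transition, RDF.type, RDFS.Class))
g.add((CPN.Place, RDF.type, RDFS.Class))
g.add((CPN.hasInputPlace, RDFS.domain, CPN.Transition))
g.add((CPN.hasInputPlace, RDFS.range, CPN.Place))

# Instance level: one business-process activity encoded as ontology individuals.
g.add((CPN.ReceiveOrder, RDF.type, CPN.Transition))
g.add((CPN.OrderQueue, RDF.type, CPN.Place))
g.add((CPN.ReceiveOrder, CPN.hasInputPlace, CPN.OrderQueue))
g.add((CPN.OrderQueue, RDFS.label, Literal("Incoming purchase orders")))

# Syntactic/semantic checks would then be run by a reasoner over such instances.
print(g.serialize(format="turtle"))
```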
|
13 |
Méthodologie de conception d'un modèle comportemental pour la vérification formelle
Bastien, Frédéric January 2006 (has links)
Master's thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
|
14 |
Protocol engineering for protection against denial-of-service attacks
Tritilanunt, Suratose January 2009 (has links)
Denial-of-service attacks (DoS) and distributed denial-of-service attacks (DDoS) attempt to temporarily disrupt users or computer resources to cause service unavailability to legitimate users in the internetworking system. The most common type of DoS attack occurs when adversaries flood a large amount of bogus data to interfere with or disrupt the service on the server. The attack can be either a single-source attack, which originates at only one host, or a multi-source attack, in which multiple hosts coordinate to flood a large number of packets to the server. Cryptographic mechanisms in authentication schemes are an example approach to help the server to validate malicious traffic. Since authentication in key establishment protocols requires the verifier to spend some resources before successfully detecting the bogus messages, adversaries might be able to exploit this flaw to mount an attack that overwhelms the server resources. The attacker is able to perform this kind of attack because many key establishment protocols incorporate strong authentication at the beginning phase, before they can identify the attacks. This is an example of DoS threats in most key establishment protocols because they have been implemented to support confidentiality and data integrity, but do not carefully consider other security objectives, such as availability. The main objective of this research is to design denial-of-service resistant mechanisms in key establishment protocols. In particular, we focus on the design of cryptographic protocols related to key establishment protocols that implement client puzzles to protect the server against resource exhaustion attacks. Another objective is to extend formal analysis techniques to include DoS-resistance. Basically, the formal analysis approach is used not only to analyse and verify the security of a cryptographic scheme carefully but also to help in the design stage of new protocols with a high level of security guarantee. In this research, we focus on an analysis technique of Meadows' cost-based framework, and we implement a DoS-resistant model using Coloured Petri Nets. Meadows' cost-based framework is directly proposed to assess denial-of-service vulnerabilities in cryptographic protocols using mathematical proof, while Coloured Petri Nets are used to model and verify communication protocols using interactive simulations. In addition, Coloured Petri Nets are able to help the protocol designer to clarify and reduce inconsistencies in the protocol specification. Therefore, the second objective of this research is to explore vulnerabilities in existing DoS-resistant protocols, as well as extend a formal analysis approach to our new framework for improving DoS-resistance and evaluating the performance of the new proposed mechanism. In summary, the specific outcomes of this research include the following results:
1. A taxonomy of denial-of-service resistant strategies and techniques used in key establishment protocols;
2. A critical analysis of existing DoS-resistant key exchange and key establishment protocols;
3. An implementation of Meadows's cost-based framework using Coloured Petri Nets for modelling and evaluating DoS-resistant protocols; and
4. A development of new efficient and practical DoS-resistant mechanisms to improve the resistance to denial-of-service attacks in key establishment protocols.
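As background for the client-puzzle approach mentioned above, the sketch below shows a generic hash-based puzzle in Python; the SHA-256 construction, the 16-bit difficulty and the parameter names are illustrative assumptions and not the exact mechanism analysed in the thesis.

```python
import hashlib
import itertools
import os

def make_puzzle(server_secret: bytes, client_nonce: bytes, difficulty: int):
    """Server side: issue a puzzle bound to this client request."""
    challenge = hashlib.sha256(server_secret + client_nonce).digest()
    return challenge, difficulty

def solve_puzzle(challenge: bytes, difficulty: int) -> int:
    """Client side: find a solution whose hash has `difficulty` leading zero bits.
    Expected work is about 2**difficulty hash evaluations, forcing the client to
    spend CPU before the server commits to expensive verification work."""
    for solution in itertools.count():
        digest = hashlib.sha256(challenge + solution.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") >> (256 - difficulty) == 0:
            return solution

def verify(challenge: bytes, difficulty: int, solution: int) -> bool:
    """Server side: verification costs a single hash, keeping the server cheap."""
    digest = hashlib.sha256(challenge + solution.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") >> (256 - difficulty) == 0

challenge, k = make_puzzle(os.urandom(16), os.urandom(16), difficulty=16)
s = solve_puzzle(challenge, k)
assert verify(challenge, k, s)
```

The asymmetry (cheap to issue and verify, moderately expensive to solve) is what Meadows' cost-based framework makes explicit when comparing attacker and defender costs.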
|
15 |
Uma abordagem para avaliar o desempenho de algoritmos baseada em simulações automáticas de modelos de Petri coloridas hierárquicas
Moraes Júnior, Clarimundo Machado 03 March 2017 (has links)
Dentre as várias abordagens consagradas para análise de desempenho de algoritmos em termos de tempo de execução, destacam-se, por exemplo, a análise assintótica, as técnicas de recorrências e a análise probabilística. Entretanto, há algoritmos que apresentam certas peculiaridades que tornam o uso dessas técnicas puramente matemáticas de avaliação de desempenho inadequado ou excessivamente árduo. É o caso, por exemplo, de algoritmos cujo tempo de execução pode variar significativamente para um mesmo dado de entrada em função da dinâmica de execução. O mesmo acontece no caso de algoritmos distribuídos em que, dependendo da complexidade da política de distribuição utilizada, a avaliação por meio de métodos analíticos do efeito de um gradual incremento de processadores no seu tempo de execução pode tornar-se impraticável. Em situações como essas, a fim de evitar a alta complexidade matemática envolvida na análise de desempenho desses algoritmos, algumas alternativas baseadas em métodos empíricos ou em modelagem visual vêm sendo adotadas pelos pesquisadores. Contudo, ambas as alternativas apresentam inconvenientes: no caso dos métodos empíricos, eles requerem a implementação dos algoritmos analisados, o que tem um efeito perverso particularmente no caso dos algoritmos distribuídos, uma vez que eles demandam a aquisição prévia de recursos de hardware dispendiosos de multiprocessamento antes mesmo de saber se a proposta de distribuição investigada, de fato, vale a pena. Já as abordagens baseadas em modelos visuais atualmente utilizadas (baseadas em grafos, autômatos e Unified Modeling Language - UML) não contam com os recursos dinâmicos necessários para lidar com a avaliação do tempo de execução dos algoritmos. Neste cenário, o presente trabalho propõe uma abordagem visual formal para avaliar o tempo de execução de algoritmos baseada em simulações automáticas de modelos de Redes de Petri Coloridas Hierárquicas (RdPCH) no ambiente gráfico CPN Tools. A abordagem proposta é validada por meio de cálculo dos seguintes parâmetros associados aos algoritmos usados como estudo de caso: a função de complexidade, o tempo de execução real e, no caso dos algoritmos distribuídos, o speedup e a eficiência. Foram usados como estudos de caso os seguintes três relevantes algoritmos de busca usados nos agentes da Inteligência Artificial com a finalidade de definir as ações mais apropriadas que tais agentes devem executar de modo a cumprir seu objetivo, com êxito, em um ambiente em que um oponente tenta minimizar suas chances de sucesso: os algoritmos seriais Minimax e Alpha-Beta; e o algoritmo distribuído PVS. Os resultados obtidos confirmam a correção da abordagem proposta. / Among the various approaches established for analyzing the performance of algorithms in terms of runtime, one can highlight, as examples, asymptotic analysis, recurrence techniques, and probabilistic analysis. However, there are algorithms that present certain peculiarities which make the use of these purely mathematical performance evaluation techniques inadequate or excessively difficult. This is the case of algorithms for which the runtime varies significantly for the same input data, due to the dynamics of execution. The same happens in the case of distributed algorithms, where, depending on the complexity of the distribution policy being used, evaluating by analytical methods the effect of a gradual increase in the number of processors on runtime can become impractical.
In situations such as these, in order to avoid the high mathematical complexity involved in the performance analysis, researchers have adopted alternatives based on empirical methods or on visual modeling. However, both alternatives have drawbacks. Empirical methods require the implementation of the analyzed algorithms, which has a particularly perverse effect in the case of distributed algorithms, since they demand the acquisition of expensive multiprocessing hardware before it is even known whether the distribution proposal under investigation is in fact viable. The visual-modeling approaches currently in use (based on graphs, automata and the Unified Modeling Language), on the other hand, do not offer the dynamic resources needed to deal with the runtime evaluation of algorithms. In this scenario, the present study proposes a formal visual approach for evaluating algorithm runtime based on automatic simulations of Hierarchical Colored Petri Net models in the CPN Tools graphic environment. The proposed approach is validated through the calculation of the following parameters associated with the algorithms used as case studies: the complexity function, the real runtime and, in the case of distributed algorithms, the speedup and the efficiency. The case studies were based on three relevant search algorithms used by Artificial Intelligence agents to define the most appropriate actions such agents should execute in order to fulfil their objective in an environment where an opponent tries to minimize their chances of success: the serial algorithms Minimax and Alpha-Beta, and the distributed algorithm PVS. The results obtained confirm the correctness of the proposed approach. / Tese (Doutorado)
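For reference, the speedup and efficiency mentioned above are assumed here to follow their usual definitions, with T_1 the runtime of the serial algorithm and T_p the runtime on p processors (a standard convention, not a formula quoted from the thesis):

```latex
S_p = \frac{T_1}{T_p}, \qquad E_p = \frac{S_p}{p} = \frac{T_1}{p\,T_p}
```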
|
16 |
Pricing With Uncertainty : The impact of uncertainty in the valuation models of Dupire and Black&Scholes
Zetoun, Mirella January 2013 (has links)
The aim of this master's thesis is to study the impact of uncertainty in the local and implied volatility surfaces when pricing certain structured products such as capital protected notes and autocalls. Due to their long maturities, the limited availability of data and liquidity issues, the uncertainty may have a crucial impact on the choice of valuation model. The degree of sensitivity and reliability of two different valuation models is studied. The valuation models chosen for this thesis are the local volatility model of Dupire and the implied volatility model of Black&Scholes. The two models are stress tested with varying volatilities within an uncertainty interval chosen to be the volatilities obtained from Bid and Ask market prices. The volatility surface of the Mid market prices is set as the relative reference and then successively scaled up and down to measure the uncertainty. The results indicate that the uncertainty in the chosen interval for the Dupire model is of higher order than in the Black&Scholes model, i.e. the local volatility model is more sensitive to volatility changes. Also, the price derived in the Black&Scholes model is closer to the market price of the issued CPN, and the Dupire price is closer to that of the issued Autocall. This might be an indication of uncertainty in the calibration method, the size of the chosen uncertainty interval or the constant extrapolation assumption. A further observation is that the prices derived from the Black&Scholes model are overall higher than the prices from the Dupire model. Another observation of interest is that the uncertainty between the models is significantly greater than within each model itself. / Syftet med detta examensarbete är att studera inverkan av osäkerhet, i prissättningen av strukturerade produkter, som uppkommer på grund av förändringar i volatilitetsytan. I denna studie värderas olika slags autocall- och kapitalskyddade strukturerade produkter. Strukturerade produkter har typiskt långa löptider vilket medför osäkerhet i värderingen då mängden data är begränsad och man behöver ta till extrapolationsmetoder för att komplettera. En annan faktor som avgör storleksordningen på osäkerheten är illikviditeten, vilken mäts som spreaden mellan listade Bid- och Ask-priset. Dessa orsaker ligger bakom intresset att studera osäkerheten för långa löptider över alla lösenpriser och dess inverkan på två olika värderingsmodeller. Värderingsmodellerna som används i denna studie är Dupires lokala volatilitetsmodell samt Black&Scholes implicita volatilitetsmodell. Dessa ställs mot varandra i en jämförelse gällande stabilitet och förmåga att fånga upp volatilitetsändringar. Man utgår från Mid-volatilitetsytan som referens och uppmäter prisändringar i intervallet från Bid upp till Ask-volatilitetsytorna genom att skala Mid-ytan. Resultaten indikerar på större prisskillnader inom Dupires modell i jämförelse mot Black&Scholes. Detta kan tolkas som att Dupires modell är mer känslig i sammanhanget och har en starkare förmåga att fånga upp förändringar i svansarna. Vidare notering är att priserna beräknade i Dupire är relativt billigare än motsvarande från Black&Scholes-modellen. En ytterligare observation är att osäkerheten mellan värderingsmodellerna är av högre ordning än inom var modell för sig. Ett annat resultat visar att CPN-priset beräknat i Black&Scholes modell ligger närmast marknadspriset medan marknadspriset för Autocallen ligger närmare Dupires.
Detta kan vara en indikation på osäkerheten i kalibreringsmetoden, eventuellt det valda osäkerhetsintervallet och det konstanta extrapolationsantagandet.
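For context, a minimal sketch of the Black&Scholes call price being stress tested is given below (flat volatility, no dividends); the market parameters and the scaling of the Mid volatility towards hypothetical Bid/Ask levels are purely illustrative and not taken from the thesis.

```python
from math import erf, exp, log, sqrt

def norm_cdf(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(spot, strike, rate, vol, maturity):
    """Black&Scholes price of a European call under a flat volatility."""
    d1 = (log(spot / strike) + (rate + 0.5 * vol**2) * maturity) / (vol * sqrt(maturity))
    d2 = d1 - vol * sqrt(maturity)
    return spot * norm_cdf(d1) - strike * exp(-rate * maturity) * norm_cdf(d2)

# Illustrative uncertainty check: reprice with the Mid volatility scaled towards
# hypothetical Bid/Ask levels and look at the resulting price spread.
spot, strike, rate, maturity = 100.0, 105.0, 0.02, 5.0
mid_vol = 0.25
for label, vol in [("bid-ish", 0.9 * mid_vol), ("mid", mid_vol), ("ask-ish", 1.1 * mid_vol)]:
    print(label, round(bs_call(spot, strike, rate, vol, maturity), 3))
```

The Dupire local volatility model has no comparably compact closed form; it is calibrated from the whole implied surface, which is one reason it reacts more strongly when that surface is bumped.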
|
17 |
Generation of multi-level and multi-user games through modelling in hierarchical coloured Petri nets / Geração de jogos multiníveis e com múltiplos usuários por meio de modelagem em Redes de Petri coloridas hierárquicas
Vanessa Viana da Silva Carvalho 19 December 2014 (links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / This work presents a method to generate multi-level and multi-user games using Hierarchical Coloured Petri Nets. The design of a multi-user game containing multiple navigation environments (multiple levels) is made from the model of a Hierarchical Coloured Petri Net, in which all of the rules, properties and structures of the game are specified. The model created for the game can be formally analysed, verifying, for example, the existence of deadlocks and dead transitions (invalid paths), among other possible design problems, which can be done with the tools available in CPN Tools. To validate this method, a tool called CPN Games has been developed, which allows the fast and dynamic development of games of simple design using only Hierarchical Coloured Petri Nets. The XML code of the models designed in CPN Tools is interpreted by CPN Games following a set of predefined rules to instantiate different games, without additional programming. The mechanisms for designing, analysing and validating the models are demonstrated, and finally several examples of games developed with this tool are presented. / Este trabalho apresenta um método para geração de jogos multiníveis e com múltiplos usuários por meio de modelagem em Redes de Petri Coloridas Hierárquicas. A concepção de um jogo multiusuário, contendo múltiplos ambientes de navegação (multiníveis), é feita a partir da criação de um modelo hierárquico em Rede de Petri Colorida, no qual se especificam todas as regras, propriedades e estruturas do jogo. O modelo criado para o jogo pode ser analisado formalmente, verificando-se, por exemplo, a existência de bloqueios e de transições mortas (caminhos inválidos), entre outros possíveis problemas de concepção, o que pode ser feito com a utilização de ferramentas disponíveis no CPN Tools. Para validar esse método, foi concebida uma ferramenta, denominada CPN Games, que permite o desenvolvimento rápido e dinâmico de jogos de concepção simples exclusivamente por Rede de Petri Colorida Hierárquica. Os códigos em XML de modelos constituídos com o CPN Tools são interpretados pelo CPN Games seguindo um conjunto de regras pré-estabelecidas para instanciar diferentes jogos, sem a necessidade de programação adicional. São demonstrados os mecanismos de criação, análise e validação dos modelos e diferentes exemplos de jogos construídos com a ferramenta.
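As a rough illustration of reading a CPN Tools model file, the sketch below lists the pages, places and transitions of a .cpn document; the element and attribute names are assumptions about the CPN Tools XML export and may need adjusting, and the actual game-instantiation rules of CPN Games are not reproduced here.

```python
import xml.etree.ElementTree as ET

def summarize_cpn(path: str) -> dict:
    """List places and transitions per page of a CPN Tools .cpn file.
    Assumes the export uses <page>, <place> and <trans> elements whose names
    sit in a <text> child; adjust the tag names if the format differs."""
    tree = ET.parse(path)
    summary = {}
    for page in tree.iter("page"):
        def names(tag):
            out = []
            for el in page.iter(tag):
                text = el.find("text")
                out.append(text.text if text is not None and text.text else el.get("id"))
            return out
        summary[page.get("id", "page")] = {
            "places": names("place"),       # candidate game states / resources
            "transitions": names("trans"),  # candidate player moves / events
        }
    return summary

# Each page could then be mapped to one game level, with its places and
# transitions driving the level's states and allowed moves.
# print(summarize_cpn("model.cpn"))
```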
|
18 |
Analysis of Proportional Navigation Class of Guidance Law against Agile Targets
Ghosh, Satadal January 2014 (has links) (PDF)
Guidance is defined as the determination of a strategy for following a nominal path in the presence of off-nominal conditions, disturbances and uncertainties, and the strategy employed is called a guidance law. Variants of Proportional Navigation (PN), such as True Proportional Navigation (TPN) and Pure Proportional Navigation (PPN), have been studied extensively in the literature on tactical missile guidance. In the absence of target maneuvers, in a linear interceptor guidance problem, TPN was shown to be optimal. However, the standard PN class of guidance laws per se does not show good performance against maneuvering targets, and was found to be effective in intercepting a maneuvering target only from a restrictive set of initial geometries. Also, since these guidance laws were effectively designed for lower speed targets, they show a degraded performance when applied against higher speed targets. However, in the current defense scenario, two classes of agile targets, which are capable of continuous maneuver and/or of much higher speed than the interceptor, are a reality. This thesis presents an analysis of several variants of the PN class of guidance laws against these two classes of agile targets.
In the literature, an augmentation of the TPN guidance law, termed Augmented Proportional Navigation (APN), was shown to be optimal in a linearized engagement framework. The present work proposes an augmentation of the PPN guidance law, which is more realistic than TPN for an aerodynamically controlled interceptor, analyzes its capturability in a fully nonlinear framework, and develops sufficient conditions on speed ratio, navigation gain and augmentation parameter to ensure that all possible initial engagement geometries are included in the capture zone when applied against a target executing a piecewise continuous maneuver. The thesis also obtains the capture zone in the relative velocity space for the augmented PPN guidance law.
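For reference, the planar forms of these laws as usually written in the guidance literature (not necessarily in the thesis's notation) are

```latex
a_{\mathrm{TPN}} = N\,V_c\,\dot{\lambda}, \qquad
a_{\mathrm{APN}} = N\,V_c\,\dot{\lambda} + \tfrac{N}{2}\,a_T, \qquad
a_{\mathrm{PPN}} = N\,V_M\,\dot{\lambda}
```

where lambda is the line-of-sight angle, V_c the closing speed, V_M the interceptor speed, a_T the target lateral acceleration and N the navigation gain; in PPN the commanded acceleration is applied normal to the interceptor velocity, and the Retro-PN law discussed next simply takes N negative.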
In the literature, a novel guidance law was proposed for the interception of higher speed targets in planar engagement by using a negative navigation gain instead of the standard positive one, and was termed Retro-PN. It was shown that even though a Retro-PN guided interceptor takes more time than a PN guided one in achieving successful interception, Retro-PN performs significantly better than the classical PN law, in terms of capturability, lateral acceleration demand, and closing velocity, when used against higher speed targets. The thesis analyzes the Retro-PN guidance law in 3-D engagement geometries to yield the complete capture zone of interceptors guided by the Retro-PN guidance philosophy, and derives necessary and sufficient conditions for the capture of higher speed non-maneuvering targets with and without a constraint on the finiteness of lateral acceleration.
Terminal impact angle control is crucial for enhancing warhead effectiveness. In the literature, this problem has been addressed mostly in the context of targets with lower speeds than the interceptor. The thesis analyzes the performance of a composite PN guidance law, which uses the standard PPN and Retro-PN guidance laws depending on the initial engagement geometry and the required impact angle, against higher speed non-maneuvering targets. Then, to expand the set of achievable impact angles, it proposes a modified composite PN guidance scheme and analyzes the same.
For the implementation of many modern guidance laws, a good estimate of time-to-go is essential. This requirement is especially severe in the case of impact-time-constrained engagement scenarios. To this end, an efficient and fast time-to-go estimation algorithm for generic 3-D engagements is required. Two time-to-go estimation algorithms are presented and analyzed in this work for the engagement of a PPN or Retro-PN guided interceptor and a higher speed target. The first one is a closed-form approximation of time-to-go in terms of range, nominal closing speed and an indicator of heading error, and the second one is a numerical recursive time-to-go estimation algorithm.
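A commonly used closed-form estimate of this kind corrects the range-over-closing-speed value for the heading error theta of a PN-guided interceptor with navigation gain N; the thesis's exact expression may differ:

```latex
t_{\mathrm{go}} \approx \frac{R}{V_c}\left(1 + \frac{\theta^{2}}{2(2N-1)}\right)
```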
To improve the odds of intercepting an intelligent target and destroying it, a salvo attack by two or more interceptors can be considered a viable option. Moreover, this simultaneous salvo attack can be further improved in efficiency by incorporating the shoot-look-shoot approach in making the decision about launching interceptors. This can be considered a first step towards a layered defense system, which has been described in the literature as a potentially effective strategy against short range or long range ballistic threats. To this end, the present work proposes two PPN and Retro-PN based guidance strategies for achieving a simultaneous salvo attack on a higher speed non-maneuvering target. For the implementation of the same, the numerical recursive time-to-go estimation technique proposed in this work is utilized.
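To make the two guidance modes concrete, the planar point-mass sketch below runs the same loop with a positive navigation gain (PPN) and a negative one (Retro-PN, written here in the PPN-type form where the heading rate is proportional to the line-of-sight rate); the speeds, gains, initial geometry and capture radius are arbitrary illustrative values, not the thesis's engagement scenarios.

```python
import math

def simulate(N, vm, vt, dt=0.005, t_max=80.0, capture_radius=10.0):
    """Planar engagement under PPN-type guidance: commanded lateral acceleration
    a = N * vm * los_rate, applied normal to the interceptor velocity, so the
    heading rate is N * los_rate. A negative N gives the Retro-PN mode."""
    xm, ym, gm = 0.0, 0.0, math.radians(30.0)           # interceptor position, heading
    xt, yt, gt = 8000.0, 3000.0, math.radians(180.0)    # target position, constant heading
    t, r_min = 0.0, float("inf")
    while t < t_max:
        dx, dy = xt - xm, yt - ym
        r = math.hypot(dx, dy)
        r_min = min(r_min, r)
        if r < capture_radius:
            return t, r_min                              # intercepted
        los = math.atan2(dy, dx)
        # LOS rate from the relative velocity component normal to the LOS
        los_rate = (vt * math.sin(gt - los) - vm * math.sin(gm - los)) / r
        gm += N * los_rate * dt                          # PPN / Retro-PN turn command
        xm += vm * math.cos(gm) * dt
        ym += vm * math.sin(gm) * dt
        xt += vt * math.cos(gt) * dt
        yt += vt * math.sin(gt) * dt
        t += dt
    return None, r_min                                   # no capture within t_max

# PPN (N > 0) against a slower target, Retro-PN (N < 0) against a faster one;
# each call reports (intercept time or None, closest range reached).
print(simulate(N=3.0,  vm=400.0, vt=250.0))
print(simulate(N=-3.0, vm=300.0, vt=500.0))
```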
|