  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
141

A Petri Net based Modeling and Verification Technique for Real-Time Embedded Systems

Cortés, Luis Alejandro January 2001 (has links)
Embedded systems are used in a wide spectrum of applications ranging from home appliances and mobile devices to medical equipment and vehicle controllers. They are typically characterized by their real-time behavior, and many of them must fulfill strict requirements on reliability and correctness. In this thesis, we concentrate on aspects related to modeling and formal verification of real-time embedded systems. First, we define a formal model of computation for real-time embedded systems based on Petri nets. Our model can capture important features of such systems and allows their representation at different levels of granularity. Our modeling formalism has a well-defined semantics so that it supports a precise representation of the system, the use of formal methods to verify its correctness, and the automation of different tasks along the design process. Second, we propose an approach to the problem of formal verification of real-time embedded systems represented in our modeling formalism. We make use of model checking to prove whether certain properties, expressed as temporal logic formulas, hold with respect to the system model. We introduce a systematic procedure to translate our model into timed automata so that it is possible to use available model checking tools. Various examples, including a realistic industrial case, demonstrate the feasibility of our approach on practical applications.
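As an illustrative sketch of the kind of formalism this abstract describes (not the thesis's own model of computation), a place/transition Petri net with interleaving firing semantics can be encoded in a few lines; this is the style of model that translation procedures map into timed automata for model checking:

```python
# Minimal place/transition Petri net sketch. The producer/consumer net below
# is an illustrative example, not taken from the thesis.

class PetriNet:
    def __init__(self, places, transitions, marking):
        # transitions: name -> (pre: dict place->count, post: dict place->count)
        self.places = places
        self.transitions = transitions
        self.marking = dict(marking)

    def enabled(self, t):
        pre, _ = self.transitions[t]
        return all(self.marking.get(p, 0) >= n for p, n in pre.items())

    def fire(self, t):
        assert self.enabled(t), f"transition {t} is not enabled"
        pre, post = self.transitions[t]
        for p, n in pre.items():
            self.marking[p] -= n
        for p, n in post.items():
            self.marking[p] = self.marking.get(p, 0) + n

# Toy net: 'produce' moves a token from 'idle' to 'buffer'; 'consume' drains it.
net = PetriNet(
    places={"idle", "buffer", "done"},
    transitions={
        "produce": ({"idle": 1}, {"buffer": 1}),
        "consume": ({"buffer": 1}, {"done": 1}),
    },
    marking={"idle": 1, "buffer": 0, "done": 0},
)
net.fire("produce")
net.fire("consume")
print(net.marking)  # {'idle': 0, 'buffer': 0, 'done': 1}
```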
142

Modelling Concurrent Systems with Object-Oriented Coloured Petri Nets

Wu, Angela January 2003 (has links)
This thesis presents a new modelling technique for complex concurrent systems. It integrates object-oriented methodology into the Petri Nets formalism.

Petri Nets are used for modelling concurrent systems. They have a natural graphical representation as well as formal specifications, and they have been successfully used in various industrial applications. But with the development of distributed and networked systems, their traditional weakness, namely their inadequate support for compositionality, is a big obstacle to their practical use for large, complex systems. To address this problem, we introduce Object-Oriented Coloured Petri Nets (OO-CPN), which integrate the powerful modularity of the object-oriented paradigm into the Petri Nets formalism. OO-CPN is based on Coloured Petri Nets and supports the concepts of object, class, inheritance and polymorphism. / Thesis / Master of Science (MSc)
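The key idea of coloured Petri nets referenced here can be sketched in a few lines: tokens carry data ("colours"), and a transition's guard restricts which token bindings may fire. The class and function names below are illustrative, not the OO-CPN notation of the thesis:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Token:
    colour: str
    value: int

def fire_guarded(place_tokens, guard, transform):
    """Fire on the first token satisfying the guard; return
    (produced_token, remaining_tokens), or (None, tokens) if none qualifies."""
    for tok in place_tokens:
        if guard(tok):
            rest = [t for t in place_tokens if t is not tok]
            return transform(tok), rest
    return None, list(place_tokens)

tokens = [Token("red", 3), Token("blue", 7)]
# Guard: only 'blue' tokens may pass; arc expression: increment the value.
out, rest = fire_guarded(tokens,
                         lambda t: t.colour == "blue",
                         lambda t: Token(t.colour, t.value + 1))
print(out)  # Token(colour='blue', value=8)
```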
143

Modelling and Quantitative Analysis of Performance vs Security Trade-offs in Computer Networks: An investigation into the modelling and discrete-event simulation analysis of performance vs security trade-offs in computer networks, based on combined metrics and stochastic activity networks (SANs)

Habib Zadeh, Esmaeil January 2017 (has links)
Performance modelling and evaluation has long been considered of paramount importance to computer networks, from design through development, tuning and upgrading. These networks, however, have evolved significantly since their first introduction a few decades ago. The ubiquitous Web in particular, with fast-emerging, unprecedented services, has become an integral part of everyday life. However, all this comes at the cost of substantially increased security risks, and cybercrime is now a pervasive threat for today's internet-dependent societies. Given the frequency and variety of attacks, as well as the threat of new, more sophisticated and destructive future attacks, security has become a more prevalent and mounting concern in the design and management of computer networks. Security, therefore, is equally important as performance, if not more so. Unfortunately, there is no one-size-fits-all solution to security challenges. One security defence system can only help to battle against a certain class of security threats. For overall security, a holistic approach including both reactive and proactive security measures is commonly suggested. As such, network security may have to combine multiple layers of defence at the edge, in the network, and in its constituent individual nodes. Performance and security, however, are inextricably intertwined, as security measures require considerable amounts of computational resources to execute. Moreover, in the absence of appropriate security measures, frequent security failures are likely to occur, which may catastrophically affect network performance, not to mention serious data breaches among many other security-related risks. In this thesis, we study optimisation problems for the trade-offs between performance and security, just as they exist between performance and dependability. While performance metrics are widely studied and well-established, those of security are rarely defined in a strict mathematical sense.
We therefore aim to conceptualise and formulate security by analogy with dependability so that, like performance, it can be modelled and quantified. Having employed a stochastic modelling formalism, we propose a new model for a single node of a generic computer network that is subject to various security threats. We believe this nodal model captures both performance and security aspects of a computer node more realistically, in particular the intertwinements between them. We adopt a simulation-based modelling approach in order to identify, on the basis of combined metrics, optimal trade-offs between performance and security and to facilitate more sophisticated trade-off optimisation studies in the field. We show that system parameters can be found that optimise these abstract combined metrics, while being optimal neither for performance nor for security individually. Based on the proposed simulation modelling framework, credible numerical experiments are carried out, indicating the scope for further extensions towards a systematic performance vs security tuning of computer networks.
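The combined-metric idea can be made concrete with a toy model: security effort slows the node down but lowers breach risk, and a weighted combination of the two is optimised. The functional forms, weights and rates below are assumptions for demonstration, not the thesis's stochastic activity network model:

```python
import math

def response_time(s, base=1.0):
    return base * (1.0 + s)            # more checking -> slower service

def breach_prob(s, p0=0.5, k=2.0):
    return p0 * math.exp(-k * s)       # more checking -> fewer breaches

def combined_cost(s, w_perf=1.0, w_sec=10.0):
    # Lower is better; weights encode the relative importance of each aspect.
    return w_perf * response_time(s) + w_sec * breach_prob(s)

# Sweep security-effort levels and pick the one minimising the combined cost.
levels = [i / 100 for i in range(0, 301)]
best = min(levels, key=combined_cost)
print(f"best security effort ~ {best:.2f}")
print(f"cost at 0: {combined_cost(0):.2f}, at best: {combined_cost(best):.2f}")
```

Note that the optimum is interior: effort 0 is best for pure performance and maximal effort is best for pure security, yet neither minimises the combined cost, which mirrors the trade-off observation in the abstract.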
144

[en] USE OF PETRI NET TO MODEL RESOURCE ALLOCATION IN PROCESS MINING / [pt] USO DE REDES DE PETRI NA MODELAGEM DE ALOCAÇÃO DE RECURSOS EM MINERAÇÃO DE PROCESSOS

BEATRIZ MARQUES SANTIAGO 22 November 2019 (has links)
[pt] Business Process Management é a ciência de observar como o trabalho é realizado em determinada organização garantindo produtos consistentes e se aproveitando de oportunidades de melhoria. Atualmente, boa parte dos processos são realizados em frameworks, muitos com armazenamento de arquivos de log, no qual é disponibilizada uma grande quantidade de informação que pode ser explorada de diferentes formas e com diferentes objetivos, área denominada como Mineração de Processos. Apesar de muitos desses dados contemplarem o modo como os recursos são alocados para cada atividade, o foco maior dos trabalhos nessa área é na descoberta do processo e na verificação de conformidade do mesmo. Nesta dissertação é proposto um modelo em petri net que incorpora a alocação de recurso, de forma a poder explorar as propriedades deste tipo de modelagem, como por exemplo a definição de todos os estados possíveis. Como aplicação do modelo, realizou-se um estudo comparativo entre duas políticas, uma mais especialista, de alocação de recurso, e outra mais generalista usando simulações de Monte Carlo com distribuição de probabilidade exponencial para o início de novos casos do processo e para estimação do tempo de execução do par recurso atividade. Sendo assim, para avaliação de cada política foi usado um sistema de pontuação que considera o andamento do processo e o tempo total de execução do mesmo. / [en] Business Process Management is the science of observing how the work is performed in a given organization ensuring consistent products and seeking opportunities for improvement. Currently, most of the processes are performed in frameworks, many with log files, in which a large amount of data is available. These data can be explored in different ways and with different objectives, giving rise to the Process Mining area. 
Although much of this data records how resources are allocated to each activity, the major focus of previous work is on process discovery techniques and conformance checking. In this thesis a Petri net model that incorporates resource allocation is proposed, exploring the properties of this type of modelling, such as the definition of all possible states. As a model validation, it is applied in a comparative study between two resource allocation policies, one considering the expertise of each resource and another with a more generalist allocation. The arrival of new cases and the resource-activity pair execution times were estimated by Monte Carlo simulations with exponential probability distributions. Thus, for the evaluation of each policy a scoring system was used that considers the progress of the process and its total execution time.
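The simulation setup described above, exponential case arrivals and exponential resource-activity execution times, can be sketched as a single-resource FIFO queue; the rates and the specialist/generalist contrast below are illustrative assumptions, not the dissertation's parameters:

```python
import random

random.seed(42)

def simulate(mean_interarrival=1.0, mean_exec=0.8, n_cases=10_000):
    """Monte Carlo run of a single-resource FIFO queue.
    Returns the mean flow time (waiting + execution) per case."""
    clock = 0.0               # arrival clock
    resource_free_at = 0.0    # when the single resource next becomes free
    total_flow = 0.0
    for _ in range(n_cases):
        clock += random.expovariate(1.0 / mean_interarrival)   # next arrival
        start = max(clock, resource_free_at)
        finish = start + random.expovariate(1.0 / mean_exec)   # execution time
        resource_free_at = finish
        total_flow += finish - clock
    return total_flow / n_cases

# A "specialist" resource executes its activity faster than a "generalist".
specialist = simulate(mean_exec=0.5)
generalist = simulate(mean_exec=0.9)
print(f"specialist mean flow time: {specialist:.2f}")
print(f"generalist mean flow time: {generalist:.2f}")
```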
145

Uncertainty-aware dynamic reliability analysis framework for complex systems

Kabir, Sohag, Yazdi, M., Aizpurua, J.I., Papadopoulos, Y. 18 October 2019 (has links)
Yes / Critical technological systems exhibit complex dynamic characteristics such as time-dependent behavior, functional dependencies among events, sequencing and priority of causes that may alter the effects of failure. Dynamic fault trees (DFTs) have been used in the past to model the failure logic of such systems, but the quantitative analysis of DFTs has assumed the existence of precise failure data and statistical independence among events, which are unrealistic assumptions. In this paper, we propose an improved approach to reliability analysis of dynamic systems, allowing for uncertain failure data and statistical and stochastic dependencies among events. In the proposed framework, DFTs are used for dynamic failure modeling. Quantitative evaluation of DFTs is performed by converting them into generalized stochastic Petri nets. When failure data are unavailable, expert judgment and fuzzy set theory are used to obtain reasonable estimates. The approach is demonstrated on a simplified model of a cardiac assist system. / DEIS H2020 Project under Grant 732242.
146

Spiking neural P systems: matrix representation and formal verification

Gheorghe, Marian, Lefticaru, Raluca, Konur, Savas, Niculescu, I.M., Adorna, H.N. 28 April 2021 (has links)
Yes / Structural and behavioural properties of models are very important in the development of complex systems and applications. In this paper, we investigate such properties for some classes of SN P systems. First, a class of SN P systems associated with a set of routing problems is investigated through its matrix representation. This allows us to make certain connections amongst some of these problems. Second, the behavioural properties of these SN P systems are formally verified through a natural and direct mapping of these models into kP systems, which are equipped with adequate formal verification methods and tools. Some examples are used to demonstrate the effectiveness of the verification approach. / EPSRC research grant EP/R043787/1; DOST-ERDT research grants; Semirara Mining Corp; UPD-OVCRD
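The matrix representation referred to here works, in broad strokes, by updating a configuration vector of spike counts as C' = C + s·M, where s indicates which rules fire and M records, per rule, the spikes consumed and produced. The tiny two-neuron system below is an illustrative assumption, not one of the paper's routing systems:

```python
def next_config(config, spiking_vector, M):
    """C' = C + s.M with plain lists; M is a (rules x neurons) matrix."""
    n_neurons = len(config)
    delta = [sum(spiking_vector[r] * M[r][j] for r in range(len(M)))
             for j in range(n_neurons)]
    return [config[j] + delta[j] for j in range(n_neurons)]

# Rule 0 (in neuron 1) consumes 1 spike there and sends 1 spike to neuron 2;
# rule 1 (in neuron 2) consumes 2 spikes there and sends nothing.
M = [[-1, +1],
     [0, -2]]
config = [3, 2]   # spikes currently in neurons 1 and 2
s = [1, 1]        # both rules fire in this step
print(next_config(config, s, M))  # [2, 1]
```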
147

Stochastic Petri Net Models of Service Availability in a PBNM System for Mobile Ad Hoc Networks

Bhat, Aniket Anant 15 July 2004 (has links)
Policy based network management is a promising approach for provisioning and management of quality of service in mobile ad hoc networks. In this thesis, we focus on performance evaluation of this approach in the context of the amount of service received by certain nodes called policy execution points (PEPs) or policy clients from certain specialized nodes called policy decision points (PDPs) or policy servers. We develop analytical models for the study of the system behavior under two scenarios: a simple Markovian scenario, where we assume that the random variables associated with system processes follow an exponential distribution, and a more complex non-Markovian scenario, where we model the system processes according to general distribution functions as observed through simulation. We illustrate that the simplified Markovian model provides a reasonable indication of the trend of the service availability seen by policy clients, and highlight the need for an exact analysis of the system without relying on Poisson assumptions for system processes. In the case of the more exact non-Markovian analysis, we show that our model gives a close approximation to the values obtained via empirical methods. Stochastic Petri nets are used as performance evaluation tools in the development and analysis of these system models. / Master of Science
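The Markovian scenario can be illustrated with the simplest possible availability model: a client alternates between having a reachable policy server ("up") and not ("down"), with exponential holding times. For rates lam (up to down) and mu (down to up), the steady-state availability is mu / (lam + mu); the simulation below checks that against a sample path. The rates are assumptions, not the thesis's measured values:

```python
import random

def simulate_availability(lam=0.2, mu=1.0, horizon=200_000.0, seed=7):
    """Fraction of time spent 'up' in a two-state continuous-time chain."""
    rng = random.Random(seed)
    t, up_time, state_up = 0.0, 0.0, True
    while t < horizon:
        dwell = rng.expovariate(lam if state_up else mu)
        dwell = min(dwell, horizon - t)   # truncate at the horizon
        if state_up:
            up_time += dwell
        t += dwell
        state_up = not state_up
    return up_time / horizon

analytical = 1.0 / (1.0 + 0.2)   # mu / (lam + mu) with mu=1.0, lam=0.2
empirical = simulate_availability()
print(f"analytical: {analytical:.4f}, simulated: {empirical:.4f}")
```

The non-Markovian case in the thesis replaces the exponential dwell times with general distributions fitted from simulation, which this closed form no longer covers.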
148

Managing Changes to Service Oriented Enterprises

Akram, Mohammad Salman 07 July 2005 (has links)
In this thesis, we present a framework for managing changes in service oriented enterprises (SOEs). A service oriented enterprise outsources and composes its functionality from third-party Web service providers. We focus on changes initiated or triggered by these member Web services. We present a taxonomy of changes that occur in service oriented enterprises. We use a combination of several types of Petri nets to model the triggering changes and ensuing reactive changes. The techniques presented in our thesis are implemented in WebBIS, a prototype for composing and managing e-business Web services. Finally, we conduct an extensive simulation study to prove the feasibility of the proposed techniques. / Master of Science
149

A Verification Framework for Component Based Modeling and Simulation : “Putting the pieces together”

Mahmood, Imran January 2013 (has links)
The discipline of component-based modeling and simulation offers promising gains including reduction in development cost, time, and system complexity. This paradigm is very profitable as it promotes the use and reuse of modular components and is auspicious for effective development of complex simulations. It is, however, confronted by a series of research challenges when it comes to actually practicing this methodology. One such important issue is composability verification. In modeling and simulation (M&S), composability is the capability to select and assemble components in various combinations to satisfy specific user requirements. Therefore, to ensure the correctness of a composed model, it is verified with respect to its requirements specifications. There are different approaches and existing component modeling frameworks that support composability; however, in our observation most component modeling frameworks possess no or only weak built-in support for composability verification. One such framework is the Base Object Model (BOM), which fundamentally offers satisfactory potential for effective model composability and reuse. However, it falls short of the required semantics, necessary modeling characteristics and built-in evaluation techniques, which are essential for modeling complex system behavior and reasoning about the validity of the composability at different levels. In this thesis a comprehensive verification framework is proposed to contend with some important issues in composability verification, and a verification process is suggested to verify the composability of different kinds of system models, such as reactive, real-time and probabilistic systems. With the assumption that all these systems are concurrent in nature, in which different composed components interact with each other simultaneously, the requirements for extensive techniques for structural and behavioral analysis become increasingly challenging.
The proposed verification framework provides methods, techniques and tool support for verifying composability at its different levels. These levels are defined as foundations of a consistent model composability. Each level is discussed in detail and an approach is presented to verify composability at that level. In particular we focus on the dynamic-semantic composability level due to its significance in the overall composability correctness and also due to the level of difficulty it poses in the process. In order to verify composability at this level we investigate the application of three different approaches, namely (i) Petri net based algebraic analysis, (ii) Coloured Petri Nets (CPN) based state-space analysis, and (iii) Communicating Sequential Processes based model checking. All three approaches attack the problem of verifying dynamic-semantic composability in different ways, but they all share the same aim, i.e., to confirm the correctness of a composed model with respect to its requirement specifications. Besides the operative integration of these approaches in our framework, we also contribute to the improvement of each approach for effective applicability in composability verification, such as applying algorithms for automating Petri net algebraic computations, introducing a state-space reduction technique in CPN based state-space analysis, and introducing function libraries to perform verification tasks and help the modeler with ease of use during composability verification. We also provide detailed examples of using each approach with different models to explain the verification process and their functionality. Lastly, we provide a comparison of these approaches and suggest guidelines for choosing the right one based on the nature of the model and the available information.
With the right choice of approach and following the guidelines of our component-based M&S life-cycle, a modeler can easily construct and verify BOM-based composed models with respect to their requirement specifications. / Overseas Scholarship for PhD in Selected Studies Phase II Batch I, Higher Education Commission of Pakistan. QC 20130224
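The state-space analysis mentioned among the three approaches rests on exploring a model's reachability graph; a breadth-first sketch for an ordinary Petri net is shown below. The net encoding is the illustrative one used here, not the BOM/CPN tooling of the thesis:

```python
from collections import deque

def reachable_markings(transitions, initial):
    """Breadth-first exploration of a Petri net's reachability graph.
    transitions: name -> (pre, post) dicts over places; markings are frozen
    into sets of (place, count) pairs so they can be stored in a set."""
    def freeze(m):
        return frozenset((p, n) for p, n in m.items() if n > 0)

    seen = {freeze(initial)}
    queue = deque([initial])
    while queue:
        m = queue.popleft()
        for pre, post in transitions.values():
            if all(m.get(p, 0) >= n for p, n in pre.items()):   # enabled?
                m2 = dict(m)
                for p, n in pre.items():
                    m2[p] -= n
                for p, n in post.items():
                    m2[p] = m2.get(p, 0) + n
                if freeze(m2) not in seen:
                    seen.add(freeze(m2))
                    queue.append(m2)
    return seen

# Toy net: one token cycles a -> b -> a; exactly two markings are reachable.
ts = {"t1": ({"a": 1}, {"b": 1}), "t2": ({"b": 1}, {"a": 1})}
print(len(reachable_markings(ts, {"a": 1})))  # 2
```

On this graph, properties such as reachability of a bad marking or deadlock freedom can be checked exhaustively; the state-space reduction technique the thesis contributes addresses the growth of this set for larger models.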
150

Preserving Data Integrity in Distributed Systems

Triebel, Marvin 30 November 2018 (has links)
Informationssysteme verarbeiten Daten, die logisch und physisch über Knoten verteilt sind. Datenobjekte verschiedener Knoten können dabei Bezüge zueinander haben. Beispielsweise kann ein Datenobjekt eine Referenz auf ein Datenobjekt eines anderen Knotens oder eine kritische Information enthalten. Die Semantik der Daten induziert Datenintegrität in Form von Anforderungen: Zum Beispiel sollte keine Referenz verwaist und kritische Informationen nur an einem Knoten verfügbar sein. Datenintegrität unterscheidet gültige von ungültigen Verteilungen der Daten. Ein verteiltes System verändert sich in Schritten, die nebenläufig auftreten können. Jeder Schritt manipuliert Daten. Ein verteiltes System erhält Datenintegrität, wenn alle Schritte in einer Datenverteilung resultieren, die die Anforderungen von Datenintegrität erfüllen. Die Erhaltung von Datenintegrität ist daher ein notwendiges Korrektheitskriterium eines Systems. Der Entwurf und die Analyse von Datenintegrität in verteilten Systemen sind schwierig, weil ein verteiltes System nicht global kontrolliert werden kann. In dieser Arbeit untersuchen wir formale Methoden für die Modellierung und Analyse verteilter Systeme, die mit Daten arbeiten. Wir entwickeln die Grundlagen für die Verifikation von Systemmodellen. Dazu verwenden wir algebraische Petrinetze. Wir zeigen, dass die Schritte verteilter Systeme mit endlichen vielen Transitionen eines algebraischen Petrinetzes beschrieben werden können, genau dann, wenn eine Schranke für die Bedingungen aller Schritte existiert. Wir verwenden algebraische Gleichungen und Ungleichungen, um Datenintegrität zu spezifizieren. Wir zeigen, dass die Erhaltung von Datenintegrität unentscheidbar ist, wenn alle erreichbaren Schritte betrachtet werden. Und wir zeigen, dass die Erhaltung von Datenintegrität entscheidbar ist, wenn auch unerreichbare Schritte berücksichtigt werden. Dies zeigen wir, indem wir die Berechenbarkeit eines nicht-erhaltenden Schrittes als Zeugen zeigen. 
/ Information systems process data that is logically and physically distributed over many locations. Data entities at different locations may be in a specific relationship. For example, a data entity at one location may contain a reference to a data entity at a different location, or a data entity may contain critical information such as a password. The semantics of data entities induce data integrity in the form of requirements. For example, no references should be dangling, and critical information should be available at only one location. Data integrity discriminates between correct and incorrect data distributions. A distributed system progresses in steps, which may occur concurrently. In each step, data is manipulated. Each data manipulation is performed locally and affects a bounded number of data entities. A distributed system preserves data integrity if each step of the system yields a data distribution that satisfies the requirements of data integrity. Preservation of data integrity is a necessary condition for the correctness of a system. Analysis and design are challenging, as distributed systems lack global control, employ different technologies, and data may accumulate unboundedly. In this thesis, we study formal methods to model and analyze distributed data-aware systems. As a result, we provide a technology-independent framework for design-time analysis. To this end, we use algebraic Petri nets. We show that there exists a bound for the conditions of each step of a distributed system if and only if the steps can be described by a finite set of transitions of an algebraic Petri net. We use algebraic equations and inequalities to specify data integrity. We show that preservation of data integrity is undecidable in case we consider all reachable steps. We show that preservation of data integrity is decidable in case we also include unreachable steps. We show the latter by showing computability of a non-preserving step as a witness.
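The preservation notion in this abstract can be illustrated concretely: a step preserves data integrity if it maps every valid data distribution to a valid one. Below, the integrity condition is "no dangling references", and a deleting step serves as the non-preserving witness. The state encoding and step functions are illustrative assumptions, not the algebraic-Petri-net formalization of the thesis:

```python
def integrity_holds(state):
    """state: location -> {'entities': set of ids, 'refs': set of ids}.
    Integrity: every stored reference points to an entity that exists
    at some location."""
    all_entities = set().union(*(loc["entities"] for loc in state.values()))
    return all(loc["refs"] <= all_entities for loc in state.values())

def preserves_integrity(step, state):
    """A step preserves integrity on this state if it maps a valid state
    to a valid state (vacuously true on invalid states)."""
    if not integrity_holds(state):
        return True
    return integrity_holds(step(state))

state = {
    "node1": {"entities": {"e1"}, "refs": {"e2"}},
    "node2": {"entities": {"e2"}, "refs": set()},
}

def bad_delete(s):
    # Deletes e2 at node2 while node1 still references it -> dangling ref.
    s2 = {k: {"entities": set(v["entities"]), "refs": set(v["refs"])}
          for k, v in s.items()}
    s2["node2"]["entities"].discard("e2")
    return s2

print(preserves_integrity(bad_delete, state))  # False
```

A non-preserving step like `bad_delete` is exactly the kind of computable witness the decidability argument in the thesis relies on.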
