11

A framework for availability, performance and survivability evaluation of disaster tolerant cloud computing systems

SILVA, Bruno 26 February 2016 (has links)
Cloud Computing Systems (CCSs) allow users around the world to access application services. An important challenge for CCS providers is to deliver a high-quality service even in the presence of failures, overloads, and disasters. A Service Level Agreement (SLA) is often established between providers and clients to define the availability, performance, and security requirements of such services, and fines may be imposed on providers if the SLA quality parameters are not met. A widely adopted strategy to increase CCS availability and mitigate the effects of disasters is to use redundant subsystems and geographically distributed data centers. With this approach, services of affected data centers can be transferred to operational data centers of the same CCS. However, data center synchronization time increases with distance, which may degrade system performance. Additionally, resource over-provisioning may hurt service profitability, given the high cost of redundant subsystems. Therefore, an assessment that covers performance, availability, the possibility of disasters, and data center allocation is of utmost importance for CCS projects. This work presents a framework for evaluating geographically distributed CCSs that estimates metrics related to performance, availability, and disaster recovery (man-made or natural disasters). The proposed framework comprises an evaluation process, a set of models, an evaluation tool, and a fault injection tool. The evaluation process helps designers represent CCSs and obtain the desired metrics. It adopts formal hybrid modeling, combining CCS high-level models, stochastic Petri nets (SPNs), and reliability block diagrams (RBDs) to represent and evaluate CCS subsystems. An evaluation tool (GeoClouds Modcs) is proposed to allow easy representation and evaluation of cloud computing systems. Finally, a fault injection tool for CCSs (Eucabomber 2.0) is presented to estimate availability metrics and validate the proposed models. Several case studies analyze survivability, performance, and availability metrics across multiple data center allocation scenarios.
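As a rough illustration of the kind of availability composition a reliability block diagram captures (the actual GeoClouds Modcs models are far richer), the sketch below computes the steady-state availability of a hypothetical data center with two redundant application servers in front of a single storage node; all MTTF/MTTR figures are invented for the example.

```python
# Minimal RBD-style availability sketch (illustrative only, not the thesis's tool).
# Component availability from MTTF/MTTR; series = all blocks must work,
# parallel (redundancy) = at least one block must work.
from math import prod

def availability(mttf_hours: float, mttr_hours: float) -> float:
    return mttf_hours / (mttf_hours + mttr_hours)

def series(avails):
    return prod(avails)

def parallel(avails):
    return 1.0 - prod(1.0 - a for a in avails)

# Hypothetical data center: two redundant application servers, one storage node.
app_server = availability(mttf_hours=2000.0, mttr_hours=8.0)
storage = availability(mttf_hours=5000.0, mttr_hours=24.0)

data_center = series([parallel([app_server, app_server]), storage])
print(f"Estimated steady-state availability: {data_center:.6f}")
```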
12

Mathematical Formula Recognition and Automatic Detection and Translation of Algorithmic Components into Stochastic Petri Nets in Scientific Documents

Kostalia, Elisavet Elli January 2021 (has links)
No description available.
13

Contribution à l'évaluation de sûreté de fonctionnement des architectures de surveillance/diagnostic embarquées. Application au transport ferroviaire / Contribution to the dependability assessment of embedded monitoring/diagnosis architectures. Application to railway transport

Gandibleux, Jean 06 December 2013 (has links)
In railway transport, rolling stock cost and availability are major concerns. To optimize the maintenance cost of the railway transport system, one solution consists in better detecting and diagnosing failures. Today, centralized monitoring/diagnosis architectures are reaching their limits, making innovation necessary. This technological innovation may take the form of embedded, distributed, and communicating monitoring/diagnosis architectures, in order to detect and localize failures faster and to validate them against the train's operational context. The present research work, carried out as part of the SURFER FUI project (French acronym for railway active monitoring) led by Bombardier, aims to propose a methodology for assessing the dependability of monitoring/diagnosis architectures. To this end, a generic characterization and modeling of monitoring/diagnosis architectures, based on stochastic Petri nets, has been proposed. These generic models take into account the communication networks (and their associated failure modes), which constitute a critical point of the studied monitoring/diagnosis architectures. The proposed models have been implemented and theoretically validated by simulation, and the sensitivity of these monitoring/diagnosis architectures to influential parameters has been studied. Finally, the generic models have been applied to a real case from the railway domain, train passenger access systems, which are critical in terms of availability and diagnosability.
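For readers unfamiliar with how such stochastic failure/repair models are typically evaluated, the sketch below runs a crude Monte Carlo availability estimate for a single monitoring node alternating between exponentially distributed up and down phases; the rates are invented, and the thesis's stochastic Petri net models (which also cover communication networks and their failure modes) are considerably more detailed.

```python
# Crude Monte Carlo availability estimate for one node with exponential
# failure and repair times (illustrative assumption, not the thesis's model).
import random

def simulate_availability(failure_rate, repair_rate, horizon_hours, runs=10_000):
    up_time_total = 0.0
    for _ in range(runs):
        t, up, up_time = 0.0, True, 0.0
        while t < horizon_hours:
            rate = failure_rate if up else repair_rate
            dwell = min(random.expovariate(rate), horizon_hours - t)
            if up:
                up_time += dwell
            t += dwell
            up = not up                 # alternate between up and down phases
        up_time_total += up_time
    return up_time_total / (runs * horizon_hours)

# Invented rates: one failure every 2000 h on average, 8 h mean repair time.
print(simulate_availability(failure_rate=1 / 2000, repair_rate=1 / 8,
                            horizon_hours=8760))
```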
14

Probabilistic Estimation of Unobserved Process Events

Rogge-Solti, Andreas January 2014 (has links)
Organizations strive to gain competitive advantages and to increase customer satisfaction. To ensure the quality and efficiency of their business processes, they perform business process management. An important part of process management that happens on the daily operational level is process controlling. A prerequisite of controlling is process monitoring, i.e., keeping track of the performed activities in running process instances. Only by process monitoring can business analysts detect delays and react to deviations from the expected or guaranteed performance of a process instance. To enable monitoring, process events need to be collected from the process environment. When a business process is orchestrated by a process execution engine, monitoring is available for all orchestrated process activities. Many business processes, however, do not lend themselves to automatic orchestration, e.g., because of the freedom of action they require. This situation is often encountered in hospitals, where most business processes are enacted manually. Hence, in practice it is often inefficient or infeasible to document and monitor every process activity. Additionally, manual process execution and documentation are prone to errors, e.g., documentation of activities can be forgotten. Thus, organizations face the challenge of process events that occur but are not observed by the monitoring environment. These unobserved process events can serve as a basis for operational process decisions, even without exact knowledge of when they happened or when they will happen. An exemplary decision is whether to invest more resources to ensure timely completion of a case, anticipating that the process end event will occur too late. This thesis offers means to reason about unobserved process events in a probabilistic way. We address decisive questions of process managers (e.g., "When will the case be finished?" or "When did we perform the activity that we forgot to document?"). As the main contribution, we introduce an advanced probabilistic model to business process management that is based on a stochastic variant of Petri nets. We present a holistic approach to use the model effectively along the business process lifecycle. To this end, we provide techniques to discover such models from historical observations, to predict the termination time of processes, and to ensure quality by managing missing data. We propose mechanisms to optimize the configuration for monitoring and prediction, i.e., to offer guidance in selecting important activities to monitor. An implementation is provided as a proof of concept. For evaluation, we compare the accuracy of the approach with that of state-of-the-art approaches using real process data from a hospital. Additionally, we show its more general applicability in other domains by applying the approach to process data from logistics and finance.
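As a toy illustration of the kind of question the approach answers ("When will the case be finished?"), the sketch below estimates the remaining duration of a case from the activities still to be executed, assuming independent exponentially distributed durations on a purely sequential path; activity names and mean durations are invented, and the thesis's stochastic Petri net model is considerably more general.

```python
# Toy remaining-time prediction under invented, simplified assumptions
# (sequential path, exponential durations); not the thesis's actual model.
import random

remaining_activities = {        # activity -> assumed mean duration in hours
    "lab_test": 2.0,
    "doctor_review": 1.5,
    "discharge": 0.5,
}

def sample_remaining_time():
    # One random scenario: draw a duration for each remaining activity.
    return sum(random.expovariate(1.0 / mean)
               for mean in remaining_activities.values())

samples = [sample_remaining_time() for _ in range(50_000)]
expected = sum(samples) / len(samples)
deadline = 6.0
p_late = sum(s > deadline for s in samples) / len(samples)
print(f"Expected remaining time: {expected:.2f} h, "
      f"P(finish later than {deadline} h): {p_late:.3f}")
```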
15

Contribution au diagnostic et pronostic des systèmes à évènements discrets temporisés par réseaux de Petri stochastiques / Contribution to fault diagnosis and prognosis of timed discrete event systems using stochastic Petri nets

Ammour, Rabah 11 December 2017 (has links)
Due to the increasing complexity of systems and to the limited number of sensors, developing effective monitoring methods is a major issue. This PhD thesis deals with the fault diagnosis and prognosis of timed Discrete Event Systems (DES). For that purpose, partially observed stochastic Petri nets are used to model the system. The model represents both the nominal and faulty behaviors of the system and characterizes the uncertainty on the occurrence of events as random variables with exponential distributions. It also accounts for partial measurements of both markings and events to represent the sensors of the system. Our main contribution is to exploit the timing information, namely the dates of the measurements, for the fault diagnosis and prognosis of DES. From the proposed model and the collected measurements, the behaviors of the system that are consistent with those measurements are first obtained. Based on the event dates, our approach then evaluates the probabilities of these consistent behaviors; the probability of fault occurrence follows as a consequence. When a fault is detected, a method to estimate the distribution of its occurrence date is proposed in order to characterize the fault more precisely. From the probabilities of the consistent trajectories, a state estimation is also deduced: the markings compatible with the measurements are determined together with their associated probabilities. The future possible behaviors of the system from the current state are then considered in order to predict a fault before it occurs, yielding an estimate of the fault occurrence probability over a future time horizon. This prognosis result is extended to estimate the remaining useful life of the system as a time interval. Finally, the developed methods are applied to a case study representing a sorting system.
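A back-of-the-envelope sketch of the underlying idea, under strong simplifying assumptions: two candidate firing sequences that explain the same timed observations are weighed by multiplying the exponential densities of the observed inter-event delays and normalizing. Rates, delays, and the two candidate explanations are invented, and the thesis additionally handles partial marking/event measurements and full race semantics.

```python
# Weighing two invented candidate explanations of timed observations
# (illustrative simplification, not the thesis's algorithm).
import math

def exp_pdf(rate, delay):
    return rate * math.exp(-rate * delay)

observed_delays = [1.2, 0.4]      # assumed hours between successive measurements

# Each candidate explanation lists the rate of the transition assumed to fire
# at each observed step.
nominal = [1.0, 2.0]              # nominal transitions only
faulty = [1.0, 0.3]               # second step explained by a slow fault transition

def likelihood(rates, delays):
    return math.prod(exp_pdf(r, d) for r, d in zip(rates, delays))

l_nom = likelihood(nominal, observed_delays)
l_fault = likelihood(faulty, observed_delays)
p_fault = l_fault / (l_nom + l_fault)
print(f"Posterior probability of the faulty explanation: {p_fault:.3f}")
```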
