1. Probabilistic Fault Management in Networked Systems
Steinert, Rebecca, January 2014
Technical advances in network communication systems (e.g. radio access networks), combined with evolving concepts based on virtualization (e.g. clouds), require new management algorithms to handle the increasing complexity of network behavior and the variability of the network environment. Current network management operations are primarily centralized and deterministic, carried out via automated scripts and manual interventions, which works for mid-sized and fairly static networks. The next generation of communication networks and systems will be of significantly larger size and complexity, and will require scalable and autonomous management algorithms to meet operational requirements on reliability, failure resilience, and resource efficiency.

A promising approach to these challenges is the development of probabilistic management algorithms, following three main design goals. The first goal relates to all aspects of scalability, ranging from efficient usage of network resources to computational efficiency. The second relates to adaptability: keeping the models up to date so that they accurately reflect the network state. The third relates to reliability of algorithm performance, in the sense of improved performance predictability and simplified algorithm control.

This thesis is about probabilistic approaches to fault management that follow the concepts of probabilistic network management (PNM). An overview of existing network management algorithms and methods in relation to PNM is provided. The concepts of PNM and the implications of employing PNM algorithms are presented and discussed, and some of the practical differences between using a probabilistic fault detection algorithm and a deterministic method are investigated. Further, six probabilistic fault management algorithms that implement different aspects of PNM are presented. The algorithms are highly decentralized, adaptive, and autonomous, and cover several problem areas: probabilistic fault detection with controllable detection performance; distributed and decentralized change detection in modeled link metrics; root-cause analysis in virtual overlays; event correlation and pattern mining in data logs; and probabilistic failure diagnosis. The probabilistic models (largely based on Bayesian parameter estimation) are memory-efficient and can be used and re-used for multiple purposes, such as performance monitoring, detection, and self-adjustment of algorithm behavior.
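The combination the abstract describes, Bayesian parameter estimation of a link metric plus a controllable detection threshold, can be illustrated with a minimal sketch. The sketch below is not the thesis implementation: the class name, the Normal-Gamma model choice, the Gaussian tail approximation, and the `alpha` parameter are all illustrative assumptions.

```python
# A minimal sketch (assumptions, not the thesis code) of probabilistic fault
# detection on a link delay metric via Bayesian parameter estimation.
import math

class BayesianDelayMonitor:
    """Tracks link delay with a Normal-Gamma conjugate model and flags
    observations that are improbable under the learned model."""

    def __init__(self, alpha=0.01):
        # alpha: target false-positive probability -- the knob that makes
        # detection performance controllable.
        self.alpha = alpha
        # Normal-Gamma hyperparameters (weak prior).
        self.mu0, self.kappa, self.a, self.b = 0.0, 1e-3, 1.0, 1.0

    def update(self, x):
        # Standard conjugate update for a single observation x.
        mu_n = (self.kappa * self.mu0 + x) / (self.kappa + 1)
        self.a += 0.5
        self.b += 0.5 * self.kappa * (x - self.mu0) ** 2 / (self.kappa + 1)
        self.mu0, self.kappa = mu_n, self.kappa + 1

    def is_anomalous(self, x):
        # The exact posterior predictive is Student-t; for brevity we use a
        # Gaussian upper-tail test. True if P(delay >= x) < alpha.
        var = self.b / (self.a - 0.5) * (1 + 1 / self.kappa)
        z = (x - self.mu0) / math.sqrt(var)
        tail = 0.5 * math.erfc(z / math.sqrt(2))  # upper-tail probability
        return tail < self.alpha

mon = BayesianDelayMonitor(alpha=0.005)
for rtt in [10.2, 9.8, 10.5, 10.1]:   # normal round-trip times
    mon.update(rtt)
print(mon.is_anomalous(48.0))         # a large delay spike is flagged: True
```

Raising `alpha` trades more false alarms for faster detection, which is one way a detection-performance knob of this kind can be exposed to the operator.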
2. The Liability of Shareholders in Commercial Companies (La responsabilité des associés des sociétés commerciales)
Truong, Thuong, 17 November 2017
The liability of shareholders is a notion rarely addressed in the teaching of company law. In solvent (in bonis) companies, and with respect to external relationships, the personal liability of shareholders may be engaged for a fault separable from their duties (faute détachable). The principle of engaging shareholders' personal liability in relations with third parties is nevertheless contested, given the essentially internal nature of their activity. In insolvency proceedings, the non-liability of the parent company for the acts of its subsidiary is challenged. The development of this challenge is facilitated by effective weapons of prosecution drawn from the repressive arsenal, weapons to be used in the highly derogatory environment of insolvency proceedings. There is a trend toward aggravating the liability of the parent company, particularly in the social and environmental domains. The search for better protection of victims pushes the legislator to legislate on isolated problem areas, distilling the irreversible character of partial and specific solutions and thereby forcing the passage toward the establishment of a presumption of liability of the parent company for the acts of its subsidiary. Yet a substantial and effective repressive arsenal already exists, and there are avenues for softening the parent company's liability while still involving it in the difficulties of its subsidiary.
3. Minimizing Overhead for Fault Tolerance in Event Stream Processing Systems
Martin, André, 17 December 2015
Event Stream Processing (ESP) is a well-established approach for low-latency data processing that enables users to react quickly to relevant situations in soft real time. In order to cope with the sheer amount of data generated each day and with fluctuating workloads originating from data sources such as Twitter and Facebook, such systems must be highly scalable and elastic. Hence, ESP systems are typically long-running applications deployed on several hundred nodes in either dedicated data centers or cloud environments such as Amazon EC2. In such environments, nodes are likely to fail due to software aging, process errors, or hardware errors, while the unbounded stream of data demands continuous processing.
In order to cope with node failures, several fault tolerance approaches have been proposed in the literature. Active replication and rollback recovery based on checkpointing and in-memory logging (upstream backup) are two approaches commonly used to handle such failures in the context of ESP systems. However, these approaches suffer from a high resource footprint, low throughput, or unresponsiveness due to long recovery times. Moreover, recovering applications precisely, with exactly-once semantics, requires deterministic execution, which adds another layer of complexity and overhead. The sketch below illustrates the second approach.
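To make rollback recovery with upstream backup concrete, here is a minimal sketch under assumed names and simplifications (it is not StreamMine3G's API): the upstream node retains emitted events in an in-memory log, trims the log once a downstream checkpoint covers them, and replays the remainder after a failure.

```python
# Illustrative sketch of checkpointing + upstream backup for one operator.
import copy

class UpstreamNode:
    def __init__(self):
        self.log = []   # in-memory log of (seq, event)
        self.seq = 0

    def emit(self, event, downstream):
        self.seq += 1
        self.log.append((self.seq, event))
        downstream.process(self.seq, event)

    def trim(self, acked_seq):
        # Drop events already captured by a downstream checkpoint.
        self.log = [(s, e) for s, e in self.log if s > acked_seq]

    def replay(self, downstream, from_seq):
        for s, e in self.log:
            if s > from_seq:
                downstream.process(s, e)

class CountingOperator:
    """Stateful operator: counts occurrences per event key."""
    def __init__(self):
        self.state, self.last_seq = {}, 0
        self.checkpoint = ({}, 0)

    def process(self, seq, event):
        self.state[event] = self.state.get(event, 0) + 1
        self.last_seq = seq

    def take_checkpoint(self, upstream):
        self.checkpoint = (copy.deepcopy(self.state), self.last_seq)
        upstream.trim(self.last_seq)   # log before this point is now safe

    def recover(self, upstream):
        # Roll back to the checkpoint, then replay the upstream log.
        self.state = copy.deepcopy(self.checkpoint[0])
        self.last_seq = self.checkpoint[1]
        upstream.replay(self, self.last_seq)

up, op = UpstreamNode(), CountingOperator()
up.emit("a", op); up.emit("b", op)
op.take_checkpoint(up)            # state {'a': 1, 'b': 1} is now durable
up.emit("a", op)                  # processed but not yet checkpointed
op.state.clear()                  # simulate a crash losing in-memory state
op.recover(up)                    # restore checkpoint, replay the lost 'a'
assert op.state == {"a": 2, "b": 1}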
The goal of this thesis is to lower the overhead of fault tolerance in ESP systems. We first present StreamMine3G, an ESP system we built entirely from scratch in order to study and evaluate novel approaches to fault tolerance and elasticity. We then present an approach that reduces the overhead of deterministic execution by using a weak, epoch-based ordering scheme, rather than strict ordering, for commutative and tumbling windowed operators, which allows applications to recover precisely using either active or passive replication. Since most applications nowadays run in cloud environments, we furthermore propose an approach that increases system availability by efficiently utilizing spare but already paid-for resources for fault tolerance. Finally, to free users from the burden of choosing a fault tolerance scheme that guarantees the desired recovery time while still saving resources, we present a controller-based approach that adapts fault tolerance at runtime. We showcase the applicability of StreamMine3G using real-world applications and examples.
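The intuition behind weak, epoch-based ordering can be sketched for a commutative tumbling-window operator: replicas must agree only on which events fall into each epoch, not on the arrival order within an epoch, so two replicas consuming the same events in different orders still produce identical epoch results. The class and method names below are illustrative assumptions, not the thesis API.

```python
# Minimal sketch: a commutative tumbling-window (epoch) operator for which
# intra-epoch event order does not affect the result.
from collections import defaultdict

class EpochSumOperator:
    """Sums values per tumbling window (epoch)."""
    def __init__(self, epoch_len_ms):
        self.epoch_len = epoch_len_ms
        self.epochs = defaultdict(int)

    def process(self, timestamp_ms, value):
        # Only epoch membership matters; addition is commutative.
        self.epochs[timestamp_ms // self.epoch_len] += value

    def close_epoch(self, epoch_id):
        # Called once all events of the epoch have arrived on every replica.
        return self.epochs.pop(epoch_id, 0)

# Two replicas receiving one epoch's events in different orders still agree:
events = [(5, 1), (12, 2), (7, 3)]          # (timestamp_ms, value)
r1, r2 = EpochSumOperator(100), EpochSumOperator(100)
for t, v in events:
    r1.process(t, v)
for t, v in reversed(events):
    r2.process(t, v)
assert r1.close_epoch(0) == r2.close_epoch(0) == 6
```

Because replicas only synchronize on epoch boundaries rather than on every event, the coordination cost of deterministic execution is paid once per window instead of once per tuple, which is where the overhead reduction comes from.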