61

An Autonomic Approach to Fault Tolerance in Running Applications on Desktop Grids

Viana, Antonio Eduardo Bernardes 05 September 2011 (has links)
Computer grids are characterized by the high dynamism of their execution environments, by resource and application heterogeneity, and by the requirement for high scalability. These features make tasks such as configuration, maintenance, and recovery of failed applications quite challenging, and increasingly difficult to perform by human agents alone. The autonomic computing paradigm denotes computer systems capable of changing their behavior dynamically in response to changes in the execution environment. To achieve this, the software is generally organized following the MAPE-K (Monitoring, Analysis, Planning, Execution and Knowledge) model, in which autonomic managers perform the activities of sensing the execution environment, analyzing context, and planning and executing dynamic reconfiguration actions, based on shared knowledge about the controlled system. In this work we present an autonomic mechanism based on the MAPE-K model to provide fault tolerance for applications running on computer grids. The mechanism monitors the execution environment and, based on the evaluation of the collected data, decides which reconfiguration actions should be applied to the fault-tolerance mechanism in order to keep the system in balance with the goals of minimizing the applications' average completion time and providing a high success rate in completing their tasks. This work also describes the performance evaluation of the proposed autonomic mechanism, accomplished through simulation techniques covering several environmental scenarios typical of opportunistic desktop grids.
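The MAPE-K loop described in this abstract can be pictured as a small control loop. The sketch below is illustrative only and is not the dissertation's code; the monitored metric, the thresholds, and the replication-factor knob are all invented for illustration.

```python
# Minimal MAPE-K loop sketch for adapting a fault-tolerance mechanism
# in a desktop grid. Hypothetical names and thresholds.

class Knowledge:
    """Shared state about the controlled system."""
    def __init__(self):
        self.node_failure_rate = 0.0   # observed fraction of failed tasks
        self.replication_factor = 1    # current fault-tolerance setting

def monitor(samples):
    """Sense the environment: fraction of task executions that failed."""
    return sum(samples) / len(samples)

def analyze(k, failure_rate):
    """Decide whether the system has drifted from its goals."""
    k.node_failure_rate = failure_rate
    return failure_rate > 0.2 or (failure_rate < 0.05 and k.replication_factor > 1)

def plan(k):
    """Choose a reconfiguration: replicate more when failures rise,
    less when the environment is stable (to cut completion time)."""
    if k.node_failure_rate > 0.2:
        return k.replication_factor + 1
    return max(1, k.replication_factor - 1)

def execute(k, new_factor):
    k.replication_factor = new_factor

k = Knowledge()
for window in [[0, 0, 1, 1, 1], [0, 0, 0, 0, 1]]:  # simulated monitoring windows
    rate = monitor(window)
    if analyze(k, rate):
        execute(k, plan(k))
    print(f"failure rate {rate:.2f} -> replication factor {k.replication_factor}")
```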
62

An Anomaly Behavior Analysis Methodology for the Internet of Things: Design, Analysis, and Evaluation

Pacheco Ramirez, Jesus Horacio January 2017 (has links)
Advances in mobile and pervasive computing, social network technologies, and the exponential growth in Internet applications and services are leading to the development of the Internet of Things (IoT). IoT services will be a key enabling technology for smart infrastructures that will revolutionize the way we do business, manage critical services, and secure, protect, and entertain ourselves. Large-scale IoT applications, such as critical infrastructures (e.g., smart grid, smart transportation, smart buildings), are distributed systems characterized by interdependence, cooperation, competition, and adaptation. The integration of IoT premises with sensors, actuators, and control devices allows smart infrastructures to achieve reliable and efficient operations and to significantly reduce operational costs. However, with the use of IoT we face grand challenges in securing and protecting such advanced information services, due to the significant increase in the attack surface. The interconnections between a growing number of devices expose IoT applications to attackers, and even devices intended to operate in isolation are sometimes connected to the Internet due to careless configuration or special needs (e.g., remote management). The security challenge consists of accurately identifying IoT devices, promptly detecting vulnerabilities and exploitations of IoT devices, and stopping or mitigating the impact of cyberattacks. An Intrusion Detection System (IDS) monitors the behavior of protected systems, looking for malicious activities or policy violations, in order to report to a management station or even take proactive countermeasures against a detected threat. Anomaly behavior detection is a technique that creates models of the normal behavior of the network and detects any significant deviation from normal operations. With its ability to detect new and novel attacks, anomaly detection is a promising IDS technique actively pursued by researchers. Since each IoT application has its own specification, it is hard to develop a single IDS that works properly for all IoT layers; a better approach is to design customized intrusion detection engines for different layers and then aggregate the analysis results from these engines. Moreover, manually extracting the specification of each system would be cumbersome and require considerable effort and knowledge, so we formulate our methodology on machine learning techniques that can produce efficient detection engines for different IoT applications. In this dissertation we formalize a general methodology to perform anomaly behavior analysis for IoT. We first introduce our IoT architecture for smart infrastructures, which consists of four layers: end nodes (devices), communications, services, and application. We then show our multilayer IoT security framework, which consists of five planes: function specification or model plane, attack surface plane, impact plane, mitigation plane, and priority plane. We then present a methodology to develop a general threat model that recognizes the vulnerabilities in each layer and the possible countermeasures that can be deployed to mitigate their exploitation.
In this scope, we show how to develop and deploy an anomaly behavior analysis based intrusion detection system (ABA-IDS) to detect anomalies that might be triggered by attacks against devices, protocols, information, or services in our IoT framework. We evaluated our approach by launching several cyberattacks (e.g., sensor impersonation, replay, and flooding attacks) against our testbeds developed at the University of Arizona Center for Cloud and Autonomic Computing. The results show that our approach can be used to deploy effective security mechanisms to protect the normal operations of smart infrastructures integrated into the IoT. Moreover, our approach can detect known and unknown attacks against IoT with a high detection rate and few false alarms.
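A minimal sketch of the anomaly-behavior-analysis idea: learn a statistical baseline of normal operation during training, then flag observations that deviate significantly. The feature (messages per second), the z-score test, and the threshold are assumptions, not the ABA-IDS design.

```python
# Learn a baseline of "normal" behavior and flag significant deviations.
# Illustrative only; features and thresholds are invented.
import statistics

def train_baseline(normal_observations):
    mean = statistics.mean(normal_observations)
    stdev = statistics.pstdev(normal_observations) or 1e-9
    return mean, stdev

def is_anomalous(value, baseline, threshold=3.0):
    """Flag observations more than `threshold` standard deviations
    away from the learned normal behavior."""
    mean, stdev = baseline
    return abs(value - mean) / stdev > threshold

# Example: messages per second on a sensor's link during training.
baseline = train_baseline([10, 12, 11, 9, 10, 11])
print(is_anomalous(11, baseline))    # False: normal traffic
print(is_anomalous(250, baseline))   # True: possible flooding attack
```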
63

Abstractions to Support Dynamic Adaptation of Communication Frameworks for User-Centric Communication

Allen, Andrew A 29 March 2011 (has links)
The convergence of data, audio, and video on IP networks is changing the way individuals, groups, and organizations communicate. This diversity of communication media presents opportunities for creating synergistic collaborative communications, but this form of collaboration is not without its challenges. The increasing number of communication service providers, coupled with a combinatorial mix of offered services, varying Quality of Service, and fluctuating pricing, makes it complex for users to manage and maintain `always best' priced or best-performing services. Consumers have to manually manage and adapt their communication to differences in services across devices, networks, and media, while ensuring that usage remains consistent with their intended goals. This dissertation proposes a novel user-centric approach to address this problem. The proposed approach reduces this complexity for the user by (1) providing high-level abstractions and a policy-based methodology for the automated selection of communication services, guided by high-level user policies, and (2) providing services through the seamless integration of multiple communication service providers, together with an extensible framework to support such integration. The approach was implemented in the Communication Virtual Machine (CVM), a model-driven technology for realizing communication applications. The CVM includes the Network Communication Broker (NCB), the layer responsible for providing a network-independent API to the upper layers of the CVM. The initial NCB prototype supported only a single communication framework, which limited the number, quality, and types of services available. Experimental evaluation shows that the additional overhead of the approach is minimal compared to the individual communication service frameworks. Additionally, the proposed automated approach outperformed the individual communication service frameworks for cross-framework switching.
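Policy-guided selection of communication services might look roughly like the sketch below. The provider attributes, the policy structure, and the scoring rule are invented for illustration; they are not CVM or NCB APIs.

```python
# Sketch of policy-guided provider selection: filter by hard constraints,
# then rank by a weighted preference for price vs. quality. All data here
# is hypothetical.

providers = [
    {"name": "ProviderA", "price": 0.05, "quality": 0.90, "supports_video": True},
    {"name": "ProviderB", "price": 0.02, "quality": 0.70, "supports_video": True},
    {"name": "ProviderC", "price": 0.01, "quality": 0.95, "supports_video": False},
]

def select_provider(providers, policy):
    """Apply hard requirements, then score the feasible providers."""
    feasible = [p for p in providers
                if all(p[k] == v for k, v in policy["require"].items())]
    w_price, w_quality = policy["weights"]
    # Lower price is better, so it enters the score negatively.
    return max(feasible, key=lambda p: w_quality * p["quality"] - w_price * p["price"])

policy = {"require": {"supports_video": True}, "weights": (1.0, 1.0)}
print(select_provider(providers, policy)["name"])  # ProviderA
```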
64

Optimization of autonomic resources for the management of service-based business processes in the Cloud

Hadded, Leila 06 October 2018 (has links)
Cloud computing is a new paradigm that provides computing resources as a service over the Internet in a pay-per-use model. It is increasingly used for hosting and executing business processes in general and service-based business processes (SBPs) in particular. Cloud environments are usually highly dynamic; hence, executing these SBPs requires autonomic management to cope with the changes of cloud environments, which implies the usage of a number of controlling devices referred to as Autonomic Managers (AMs). However, existing solutions are limited to using either a centralized AM or an AM per service for managing a whole SBP. It is obvious that the latter solution is resource consuming and may lead to conflicting management decisions, while the former may lead to management bottlenecks. An important problem in this context is finding the optimal number of AMs for the management of an SBP, minimizing costs in terms of the number of AMs while avoiding management bottlenecks and ensuring good management performance. Moreover, due to the heterogeneity of cloud resources and the diversity of the quality of service (QoS) required by SBPs, the allocation of cloud resources to these AMs may result in high computing costs, increased communication overheads, and/or lower QoS. It is therefore also crucial to find an optimal allocation of cloud resources to the AMs, minimizing costs while maintaining the QoS requirements.
To address these challenges, in this work we propose a deterministic optimization model for each of these two problems. Furthermore, because the time needed to solve these problems grows exponentially with problem size, we propose near-optimal algorithms that provide good solutions in reasonable time.
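As a toy stand-in for the first of these problems, the greedy first-fit sketch below assigns an SBP's services to as few AMs as possible under a per-AM capacity bound that models bottleneck avoidance. The thesis formulates this as a deterministic optimization model; this heuristic only illustrates the trade-off, and the loads and capacity are invented.

```python
# Assign services to as few autonomic managers (AMs) as possible while
# keeping each AM's monitoring load under a capacity bound.

def assign_services(service_loads, am_capacity):
    """First-fit decreasing: put each service on an existing AM with
    spare capacity, or open a new AM."""
    ams = []  # each AM is the list of service loads it manages
    for load in sorted(service_loads, reverse=True):
        for am in ams:
            if sum(am) + load <= am_capacity:
                am.append(load)
                break
        else:
            ams.append([load])
    return ams

ams = assign_services([4, 3, 3, 2, 2, 1], am_capacity=6)
print(f"{len(ams)} AMs: {ams}")  # 3 AMs: [[4, 2], [3, 3], [2, 1]]
```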
65

Towards Change Propagating Test Models In Autonomic and Adaptive Systems

Akour, Mohammed Abd Alwahab January 2012 (has links)
The major motivation for self-adaptive computing systems is the self-adjustment of software according to a changing environment. Adaptive computing systems can add, remove, and replace their own components in response to changes in the system itself and in the operating environment of a software system. Although these systems may provide a certain degree of confidence in new environments, their structural and behavioral changes should be validated after adaptation occurs at runtime. Testing dynamically adaptive systems is extremely challenging because both the structure and the behavior of the system may change during its execution. After self-adaptation occurs in autonomic software, new components may be integrated into the software system; when new components are incorporated, testing them becomes a vital phase for ensuring that they will interact and behave as expected. When self-adaptation removes existing components, a predefined test set may no longer be applicable due to changes in the program structure. Investigating techniques for dynamically updating regression tests after adaptation is therefore necessary to ensure such approaches can be applied in practice. We propose a model-driven approach, based on change propagation, for synchronizing a runtime test model of a software system with the model of its component structure after dynamic adaptation. A workflow and meta-model to support the approach, referred to as Test Information Propagation (TIP), are provided. To demonstrate TIP, a prototype was developed that simulates a reductive and an additive change to an autonomic, service-oriented healthcare application. To demonstrate that the TIP approach generalizes to the domain of up-to-date runtime testing for self-adaptive software systems, it was also applied to the self-adaptive JPacman 3.0 system. To measure the accuracy of the TIP engine, we compared its output against the work of a developer who manually identified the changes needed to update the test model after self-adaptation. The experiments show that TIP is highly accurate for reductive change propagation across self-adaptive systems, and promising results were achieved in simulating additive changes as well.
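The change-propagation idea behind TIP can be sketched as updating a test-to-component mapping in response to reductive and additive changes. The data structures below are hypothetical simplifications, not TIP's meta-model.

```python
# Propagate one structural change from the component model to the test model.

def propagate(test_model, change):
    """change = ("remove", component) drops tests touching that component;
    change = ("add", component, tests) registers tests for a new one."""
    kind = change[0]
    if kind == "remove":
        _, component = change
        return {t: comps for t, comps in test_model.items() if component not in comps}
    if kind == "add":
        _, component, tests = change
        updated = dict(test_model)
        for t in tests:
            updated.setdefault(t, set()).add(component)
        return updated
    raise ValueError(kind)

# Each test maps to the components it exercises.
tests = {"test_billing": {"Billing", "DB"}, "test_scheduler": {"Scheduler"}}
tests = propagate(tests, ("remove", "Scheduler"))               # reductive change
tests = propagate(tests, ("add", "Notifier", ["test_notify"]))  # additive change
print(sorted(tests))  # ['test_billing', 'test_notify']
```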
66

Autonomic test case generation of failing code using AOP

Murguia, Giovanni 02 September 2020 (has links)
As software systems have grown in size and complexity, the cost of maintaining them increases steadily. In the early 2000s, IBM launched the autonomic computing initiative to mitigate this problem by injecting feedback control mechanisms into software systems, enabling them to observe their health and self-heal without human intervention and thereby cope with certain changes in their requirements and environments. Self-healing is one of several fundamental challenges addressed and includes software systems that are able to recover from failure conditions. There has been considerable research on software architectures with feedback loops that allow a multi-component system to adjust certain parameters automatically in response to changes in its environment. However, modifying the components' source code in response to failures remains an open and formidable challenge. Automatic program repair techniques aim to create and apply source code patches autonomously. These techniques have evolved over the years to take advantage of advancements in programming languages, such as reflection. However, they require mechanisms to evaluate whether a candidate patch solves the failure condition. Some rely on test cases that capture the context under which the program failed: the applied patch can then be considered successful if the test result changes from failing to passing. Although test cases are an effective mechanism to govern the applicability of potential patches, the automatic generation of test cases for a given scenario has not received much attention. ReCrash represents the only known implementation that generates test cases automatically, with promising results, through the use of low-level instrumentation libraries. The work reported in this thesis explores this area further and under a different light. It proposes the use of Aspect-Oriented Programming (AOP), and in particular AspectJ, as a higher-level paradigm for expressing the code elements on which monitoring actions are interleaved with the source code, creating a representation of the context at the most relevant moments of the execution; if the code fails, the contextual representation is retained and used later to automatically write a test case. In doing so, the author intends to help fill the gap that prevents the use of automatic program repair techniques in a self-healing architecture. The prototype implementation engineered as part of this research was evaluated along three dimensions: memory usage, execution time, and binary size. The results suggest that (1) AspectJ introduces significant execution-time overhead, (2) the implementation algorithm puts a tremendous strain on garbage collection, and (3) AspectJ adds tens of additional lines of code, which account for a mean tenfold increase in the size of every binary file. The comparative analysis with ReCrash shows that the algorithm and data structures developed in this thesis produce more thorough test cases than ReCrash. Most notably, the solution presented here mitigates ReCrash's current inability to reproduce environment-specific failure conditions derived from on-demand instantiation. This work can potentially be extended to less intrusive frameworks that operate at the same level as AOP, to address the shortcomings identified in this analysis.
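The thesis weaves monitoring into Java code with AspectJ; as a rough analogue of the same idea in a different setting, the Python decorator below intercepts calls, retains the arguments as context, and emits a reproducing test case when the call fails. Everything here is illustrative and not the prototype's implementation.

```python
# Decorator-based analogue of aspect weaving: capture call context and
# generate a failing-case regression test on exception.
import functools

def capture_on_failure(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception as exc:
            # Write a test that replays the failing invocation.
            test = (
                f"def test_{func.__name__}_failure():\n"
                f"    import pytest\n"
                f"    with pytest.raises({type(exc).__name__}):\n"
                f"        {func.__name__}(*{args!r}, **{kwargs!r})\n"
            )
            print(test)
            raise
    return wrapper

@capture_on_failure
def divide(a, b):
    return a / b

try:
    divide(1, 0)
except ZeroDivisionError:
    pass  # the generated test case was printed by the aspect-like wrapper
```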
67

Generic architectures for open, multi-objective autonomic systems: application to smart micro-grids

Frey, Sylvain 06 December 2013 (has links)
Autonomic features, i.e. the capability of systems to manage themselves, are necessary to control complex systems: systems that are open, large scale, and dynamic, comprise heterogeneous third-party sub-systems, and follow multiple, sometimes conflicting, objectives. In this thesis, we aim to provide generic, reusable support for designing complex autonomic systems. We propose a formalisation of management objectives, a generic architecture for designing adaptable multi-objective autonomic systems, and generic organisations integrating such autonomic systems. We apply our approach to the concrete case of smart micro-grids, a relevant example of such complexity. We present a simulation platform we developed and illustrate our approach through several simulation scenarios.
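One way to picture a formalisation of management objectives is as data that the autonomic system can check states against and prioritize when objectives conflict. The sketch below is a loose illustration; the thesis's actual formalism is not reproduced here, and all names and values are invented.

```python
# Management objectives as checkable, prioritized constraints on
# observable variables of a micro-grid.
from dataclasses import dataclass

@dataclass
class Objective:
    name: str
    variable: str
    low: float        # acceptable lower bound
    high: float       # acceptable upper bound
    priority: int     # higher wins when objectives conflict

    def violated_by(self, state):
        return not (self.low <= state[self.variable] <= self.high)

objectives = [
    Objective("stay-cheap", "cost_eur_per_h", 0.0, 2.0, priority=1),
    Objective("keep-voltage", "voltage", 220.0, 240.0, priority=3),
]

state = {"cost_eur_per_h": 2.5, "voltage": 230.0}
violated = [o for o in objectives if o.violated_by(state)]
violated.sort(key=lambda o: o.priority, reverse=True)  # most critical first
print([o.name for o in violated])  # ['stay-cheap']
```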
68

Towards Model-Based Fault Management for Computing Systems

Jia, Rui 07 May 2016 (has links)
Large-scale distributed computing systems have been extensively utilized to host critical applications in fields such as national defense, finance, scientific research, and commerce. However, applications in distributed systems face the risk of service outages due to inevitable faults, and without proper fault management, faults can lead to significant revenue loss and degradation of Quality of Service (QoS). An ideal fault management solution should guarantee fast and accurate fault diagnosis, scalability in distributed systems, portability across a variety of systems, and the versatility to recover from different types of faults. This dissertation presents a model-based fault management structure which automatically recovers computing systems from faults. The structure can recover a system from common faults while minimizing the impact on the system's QoS; it covers all stages of fault management, including fault detection, identification, and recovery, and has the flexibility to incorporate various fault diagnosis methods. When faults occur, the approach identifies the fault type and intensity, and accordingly computes the optimal recovery plan with minimum performance degradation, based on a cost function that defines performance objectives and a predictive control algorithm. The fault management approach has been verified on a centralized Web application testbed and a distributed big-data processing testbed with four types of simulated faults: memory leak, network congestion, CPU hog, and disk failure. The feasibility of the fault recovery control algorithm is also verified. Simulation results show that the approach enables effective automatic recovery from faults, and performance evaluation reveals that the CPU and memory overhead of the fault management process is negligible. To let domain engineers conveniently apply the proposed fault management structure to their specific systems, a component-based modeling environment was developed. The meta-model of the fault management structure is defined in the Unified Modeling Language as an abstraction of a general fault recovery solution for computing systems: it defines the fundamental reusable components that comprise such a system, the connections among them, the attributes of each component, and the constraints. The meta-model can be interpreted into a user-friendly graphical modeling environment for creating application models of practical domain-specific systems and generating executable code for them.
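The recovery-planning step can be sketched as choosing the action that minimizes a cost over predicted QoS impact plus the disruption of the action itself. The toy models below are stand-ins for the dissertation's cost function and predictive control algorithm, with all numbers invented.

```python
# Pick the recovery action whose predicted outcome minimizes a cost
# combining QoS violation and the disruption the action causes.

def predicted_response_time(state, action):
    """Toy model of how each recovery action changes response time (ms)."""
    effect = {"restart_service": 0.5, "migrate_vm": 0.7, "do_nothing": 1.0}
    return state["response_ms"] * effect[action]

def cost(state, action):
    """Penalize predicted SLO violation plus the action's own disruption."""
    disruption = {"restart_service": 40.0, "migrate_vm": 120.0, "do_nothing": 0.0}
    qos_penalty = max(0.0, predicted_response_time(state, action) - state["slo_ms"])
    return qos_penalty + disruption[action]

state = {"response_ms": 900.0, "slo_ms": 300.0}  # e.g. a memory leak degraded QoS
actions = ["restart_service", "migrate_vm", "do_nothing"]
best = min(actions, key=lambda a: cost(state, a))
print(best, cost(state, best))  # restart_service 190.0
```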
69

Scalable Self-Organizing Server Clusters with Quality of Service Objectives

Adam, Constantin January 2005 (has links)
Recently proposed advanced architectures for cluster-based services allow for service differentiation, server overload control, and high utilization of resources. These systems, however, rely on centralized functions, which limit their ability to scale and to tolerate faults. In addition, they lack built-in architectural support for automatic reconfiguration in case of failures or the addition/removal of system components. Recent research in peer-to-peer systems and distributed management has demonstrated the potential benefits of decentralized over centralized designs: a decentralized design can reduce the configuration complexity of a system and increase its scalability and fault tolerance. This research focuses on introducing self-management capabilities into the design of cluster-based services. Its intended benefits are to make service platforms adapt dynamically to the needs of customers and to environmental changes, while giving service providers the capability to adjust operational policies at run time. We have developed a decentralized design that efficiently allocates resources among multiple services inside a server cluster. The design combines the advantages of both centralized and decentralized architectures and allows a set of QoS objectives to be associated with each service. In case of overload or failures, the quality of service degrades in a controllable manner. We have evaluated the performance of our design through extensive simulations, and the results have been compared with the performance characteristics of ideal systems.
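The controllable-degradation property can be illustrated with a small allocation rule: under overload, each service's share of the cluster is scaled by its QoS weight, so quality degrades in a controlled rather than arbitrary way. This is a simplified, centralized stand-in, not the decentralized protocol evaluated in the thesis.

```python
# Weighted proportional allocation of servers to services under overload.

def allocate(capacity, demands, weights):
    total_demand = sum(demands.values())
    if total_demand <= capacity:
        return dict(demands)  # every service gets what it asked for
    total_weight = sum(weights[s] * demands[s] for s in demands)
    return {s: capacity * weights[s] * demands[s] / total_weight for s in demands}

demands = {"gold": 60, "silver": 60}      # requested servers per service class
weights = {"gold": 2.0, "silver": 1.0}    # QoS objectives per service class
print(allocate(100, demands, weights))    # gold keeps ~67 servers, silver ~33
```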
70

Trustworthy Embedded Computing for Cyber-Physical Control

Lerner, Lee Wilmoth 20 February 2015 (has links)
A cyber-physical controller (CPC) uses computing to control a physical process. Example CPCs can be found in self-driving automobiles, unmanned aerial vehicles, and other autonomous systems; they are also used in the large-scale industrial control systems (ICSs) of manufacturing and utility infrastructure. CPC operations rely on embedded systems having real-time, high-assurance interactions with physical processes. However, recent attacks like Stuxnet have demonstrated that CPC malware is not restricted to networks and general-purpose computers; embedded components are targeted as well. General-purpose computing and network approaches to security are failing to protect embedded controllers, whose compromise can directly disturb or destroy the physical process. Moreover, as embedded systems grow in capability and find application in CPCs, embedded leaf-node security is gaining priority. This work develops a root-of-trust design architecture which provides process resilience to cyber attacks on, or from, embedded controllers: the Trustworthy Autonomic Interface Guardian Architecture (TAIGA). We define five trust requirements for building a fine-grained trusted computing component. TAIGA satisfies all of these requirements and addresses all classes of CPC attacks, using an approach distinguished by adding resilience to the embedded controller rather than seeking to prevent attacks from ever reaching it. TAIGA provides an on-chip, digital, security analogue of classic mechanical interlocks. This last line of defense monitors all of a controller's communications using configurable or external hardware that is inaccessible to the controller processor. The interface controller is synthesized from C code, formally analyzed, and permits run-time checked, authenticated updates to certain system parameters but not to code. TAIGA overrides any controller actions that are inconsistent with system specifications, including prediction and preemption of latent malware's attempts to disrupt system stability and safety. This material is based upon work supported by the National Science Foundation under Grant Number CNS-1222656; any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. We are grateful for donations from Xilinx, Inc. and support from the Georgia Tech Research Institute.
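TAIGA's interlock role can be pictured as a guard that sits between the controller and the actuators and overrides commands that fall outside the physical process's safety envelope. The envelope values and the fallback rule below are illustrative assumptions, not TAIGA's specification.

```python
# Guardian-style interlock: pass through in-envelope commands, override
# anything else with a known-safe setpoint.

SAFE_ENVELOPE = {"valve_percent": (0, 80), "pump_rpm": (0, 3000)}
SAFE_FALLBACK = {"valve_percent": 50, "pump_rpm": 1500}

def guard(command):
    """Pass through commands consistent with the specification;
    override out-of-envelope commands with a safe setpoint."""
    lo, hi = SAFE_ENVELOPE[command["actuator"]]
    if lo <= command["value"] <= hi:
        return command
    return {"actuator": command["actuator"],
            "value": SAFE_FALLBACK[command["actuator"]],
            "overridden": True}

print(guard({"actuator": "pump_rpm", "value": 2000}))  # passed through
print(guard({"actuator": "pump_rpm", "value": 9000}))  # overridden
```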
