About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Cost-aware Dynamic Provisioning for Performance and Power Management

Ghanbari, Saeed 30 July 2008 (has links)
Dynamic provisioning of server boxes to applications entails an inherent performance-power trade-off for the service provider, a trade-off that has not been studied in detail. The optimal number of replicas to be dynamically provisioned to an application is ultimately the configuration that results in the highest revenue. The service provider should thus dynamically provision resources for an application only as long as the resulting reward from hosting more clients exceeds its operational costs for power and cooling. We introduce a novel cost-aware dynamic provisioning approach for the database tier of a dynamic content site. Our approach employs Support Vector Machine regression for learning a dynamically adaptive system model. We leverage this lightweight on-line learning approach for two cost-aware dynamic provisioning techniques. The first is a temperature-aware scheme which avoids temperature hot-spots within the set of provisioned machines, and hence reduces cooling costs. The second is a more general cost-aware provisioning technique using a utility function expressing monetary costs for both performance and power.
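To make the provisioning decision concrete, here is a minimal illustrative sketch (not the thesis's code; the training data, revenue and power figures are all assumptions) of SVR-based performance modelling driving a cost-aware replica choice:

```python
# Illustrative sketch only: the model and cost figures are assumptions,
# not the thesis's actual implementation.
import numpy as np
from sklearn.svm import SVR

# Training data gathered online: (num_replicas, request_rate) -> throughput
X = np.array([[1, 100], [2, 100], [2, 200], [3, 200], [4, 300], [4, 400]])
y = np.array([90, 100, 180, 200, 290, 350])  # satisfied requests/sec

model = SVR(kernel="rbf", C=100.0, epsilon=5.0).fit(X, y)

REVENUE_PER_REQ = 0.002    # $ earned per satisfied request (assumed)
POWER_COST_PER_BOX = 0.15  # $/hour per provisioned server (assumed)

def best_replica_count(request_rate, max_replicas=8):
    """Provision replicas only while the reward exceeds power costs."""
    best_n, best_utility = 1, float("-inf")
    for n in range(1, max_replicas + 1):
        throughput = model.predict([[n, request_rate]])[0]
        utility = throughput * 3600 * REVENUE_PER_REQ - n * POWER_COST_PER_BOX
        if utility > best_utility:
            best_n, best_utility = n, utility
    return best_n

print(best_replica_count(request_rate=250))
```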
2

A programming system for sensor-driven scientific applications

Jiang, Nanyan. January 2009 (has links)
Thesis (Ph. D.)--Rutgers University, 2009. / "Graduate Program in Electrical and Computer Engineering." Includes bibliographical references (p. 106-116).
3

A realistic approach for the autonomic management of component-based enterprise systems

Bruhn, Jens January 2009 (has links)
Also published as: Bamberg, Univ., dissertation, 2009. / Summary in German.
4

Autonomic Context Management System for Pervasive Computing

Peizhao Hu Unknown Date (has links)
Stepping into the 21st century, we see more and more evidence of the growing trend towards the amalgamation of cyberspace and the physical world. This trend emerged as computing technologies moved off desktops and migrated into aspects of our lives through their ubiquitous presence in the physical world. As these technologies become enmeshed in our daily routines, they begin to 'disappear' from our awareness, cease to be thought of as technologies, and simply become tools of everyday use. Yet even as they disappear, these technologies afford a new way for us to interact with the environments of everyday life and with the ordinary objects within those environments. Furthering this vision will require, in many cases, tools and applications that possess greater levels of autonomy and an awareness of the user's context. As a result, applications increasingly depend for their behaviour on information (context information) relevant to user interactions. However, it is difficult to develop new context-aware applications that take into account the ever-increasing amount of context information, for three reasons: context information sources vary not only in their types but also in their availability in different environments; developers must spend significant programming effort gathering, pre-processing and managing the context information when designing and developing new applications; and information sources can fail from time to time, resulting in operational disruptions or service degradation. To make such context information easily and widely available to new context-aware applications, information provisioning and management must be provided at the infrastructure level. This thesis explores the issues and challenges associated with developing an autonomic middleware system that addresses these problems, with a particular focus on supporting fault-tolerant context information provisioning for multiple applications, supporting opportunistic use of the context sources (the sensors), and maximising the system's overall interoperability in open, dynamic computing environments (ubiquitous computing, for example). The research presented in this thesis makes several key contributions. First, it introduces a novel standards-based approach to model heterogeneous information sources and data pre-processing components. Second, it details the design of a standards-based approach for supporting the dynamic composition of context information sources and pre-processing components. This approach plays an important role in supporting fault-tolerant information provisioning from the sensors and the opportunistic use of these sensors. More specifically, it enables any given piece of high-level context information required by applications to be derived via multiple different pre-processing models, resulting in a higher degree of reliability. Third, it describes the design and development of an autonomic context management system (ACoMS), which harnesses the first two contributions above. Finally, the thesis shows how this autonomic context management system can support context-aware routing in wireless mesh networks. These contributions are evaluated through two corresponding case studies. The first is a practical firefighting scenario with three prototypical applications that validate the design and development of ACoMS. The second is an adaptive wireless mesh surveillance camera system that validates the concept of adopting ACoMS as a cross-layer information plane to ease the prototyping and development of new adaptive protocols and systems, and illustrates the need for adaptive controls at the sensing layer to optimise resource usage.
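A minimal sketch of the fault-tolerant provisioning idea, assuming hypothetical sensor functions and class names (none are from the thesis): a context item is derived via whichever pre-processing path is currently available.

```python
# Hedged sketch of ACoMS-style fault-tolerant context provisioning;
# the sensor functions and their outputs are illustrative assumptions.

def location_from_gps():
    raise IOError("GPS unavailable indoors")  # simulate a failed source

def location_from_wifi():
    return ("room", "3.214")  # coarse location from Wi-Fi fingerprinting

class ContextManager:
    """Derives a context item via the first working pre-processing path."""
    def __init__(self):
        # Multiple derivation paths per context type, tried in order of
        # preference; failure of one source degrades, not disrupts, service.
        self.paths = {"location": [location_from_gps, location_from_wifi]}

    def get(self, context_type):
        for derive in self.paths.get(context_type, []):
            try:
                return derive()
            except Exception:
                continue  # source failed: fall through to the next path
        return None  # no derivation path currently available

print(ContextManager().get("location"))  # -> ('room', '3.214')
```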
5

Monitoring and Diagnosis for Autonomic Systems: A Requirement Engineering Approach

Wang, Yiqiao 21 April 2010 (has links)
Autonomic computing holds great promise for software systems of the future, but at the same time poses great challenges for Software Engineering. Autonomic computing research aims to design software systems that self-configure, self-repair, self-optimize and self-protect, so as to reduce software maintenance cost while improving performance. The aim of our research is to develop tool-supported methodologies for designing and operating autonomic systems. Like other researchers in this area, we assume that autonomic system architectures consist of monitoring, analysis/diagnosis, planning, and execution components that define a feedback loop and serve as the basis for system self-management. This thesis proposes an autonomic framework founded on models of requirements and design. This framework defines the normal operation of a software system in terms of models of its requirements (goal models) and/or operation (statechart models). These models determine what to monitor and how to interpret log data in order to diagnose failures. The monitoring component collects and manages log data. The diagnostic component analyzes log data, identifies failures, and pinpoints problematic components. We transform the diagnostic problem into a propositional satisfiability (SAT) problem solvable by off-the-shelf SAT solvers. Log data are preprocessed into a compact propositional encoding that scales well with growing problem size. For repair, our compensation component executes compensation actions to restore the system to an earlier consistent state. When monitoring and diagnosis use requirements models, the framework repairs failures through reconfiguration: the reconfiguration component selects the reconfiguration that contributes most positively to the system's non-functional requirements while changing the system minimally. The framework does not currently offer a repair mechanism when monitoring and diagnosis use statecharts. We illustrate our framework with two medium-sized, publicly available case studies. We evaluate the framework's performance through a series of experiments on randomly generated and progressively larger specifications. The results demonstrate that our approach scales well with problem size, and can be applied to industrial-sized software applications.
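The SAT encoding can be illustrated with a toy example. The sketch below, assuming the python-sat package and an invented two-component system (neither is from the thesis), shows how log observations plus a system model let a SAT solver pinpoint the faulty component:

```python
# Minimal sketch of SAT-based diagnosis using the python-sat package
# (pip install python-sat); the component model is an assumed toy example.
from pysat.solvers import Glucose3

A_OK, B_OK, OUT_OK = 1, 2, 3  # propositional variables

solver = Glucose3()
# System model: if both components worked, the output should be correct,
# i.e. A_OK & B_OK -> OUT_OK, in clause form (-A_OK | -B_OK | OUT_OK).
solver.add_clause([-A_OK, -B_OK, OUT_OK])
# Log observations: A's intermediate check passed, the final output failed.
solver.add_clause([A_OK])
solver.add_clause([-OUT_OK])

if solver.solve():
    model = solver.get_model()  # e.g. [1, -2, -3]
    faulty = [v for v in (A_OK, B_OK) if -v in model]
    print("Suspect components:", faulty)  # -> [2], i.e. component B
solver.delete()
```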
6

Self-Configuration Framework for Networked Systems and Applications

Chen, Huoping January 2008 (has links)
The increased complexity, heterogeneity and dynamism of networked systems and applications render current configuration and management tools ineffective. A new paradigm is critically needed to dynamically configure and manage large-scale, complex and heterogeneous networked systems. In this dissertation, we present a self-configuration paradigm based on the principles of autonomic computing that can efficiently handle complexity, dynamism and uncertainty in configuring networked systems and their applications. Our approach is based on making any resource/application operate as an Autonomic Component (that is, one that can be self-configured, self-healed, self-optimized and self-protected) by using two software modules: a Component Management Interface (CMI), which specifies the configuration and operational policies associated with each component, and a Component Runtime Manager (CRM), which manages the component's configurations and operations using the policies defined in the CMI. We use several configuration metrics (adaptability, complexity, latency, scalability, overhead, and effectiveness) to evaluate the effectiveness of our self-configuration approach compared to other configuration techniques. We have used our approach to dynamically configure four systems: automatic IT system management, dynamic security configuration of networked systems, self-management of a data backup and disaster recovery system, and automatic security patch download and installation on a large-scale test bed. Our experimental results showed that applying our self-configuration approach significantly reduces the initial configuration time, the initial configuration complexity and the dynamic configuration complexity. For example, the configuration time for security patch download and installation on nine machines is reduced from 27,193 seconds to 4,399 seconds. Furthermore, our system provides greater adaptability (e.g., 100% for Snort rule-set configuration) than a hard-coded approach (e.g., 22% for Snort rule-set configuration) and can greatly improve the performance of the managed system. For example, in the data backup and recovery system, our approach reduces the total cost by 54.1% when network bandwidth decreases. In addition, our framework is scalable and imposes very small overhead (less than 1%) on the managed system.
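A hedged sketch of the CMI/CRM split, with invented policy contents (the dissertation's actual policy language is not shown here): a declarative policy is interpreted by a runtime manager that reconfigures the component as observed state changes.

```python
# Illustrative sketch only: a declarative policy (Component Management
# Interface) interpreted by a runtime manager (Component Runtime Manager).
# All policy contents and thresholds below are assumptions.

CMI_POLICY = {
    "component": "backup-service",
    "config": {"window": "02:00-04:00", "target": "s3://backups"},  # hypothetical
    "rules": [
        # (condition on observed state, reconfiguration action)
        (lambda s: s["bandwidth_mbps"] < 10, {"compression": "high"}),
        (lambda s: s["bandwidth_mbps"] >= 10, {"compression": "low"}),
    ],
}

class ComponentRuntimeManager:
    """Applies the CMI policy to keep the component's configuration valid."""
    def __init__(self, policy):
        self.policy = policy
        self.config = dict(policy["config"])

    def reconfigure(self, observed_state):
        for condition, action in self.policy["rules"]:
            if condition(observed_state):
                self.config.update(action)  # apply the first matching rule
                break
        return self.config

crm = ComponentRuntimeManager(CMI_POLICY)
print(crm.reconfigure({"bandwidth_mbps": 4}))  # compression switches to "high"
```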
7

Recovering software tuning parameters

Brake, Nevon 08 July 2008 (has links)
Autonomic Computing is an approach to designing systems that are capable of self-management. Fundamental to the autonomic ideal is a software system's awareness of, and ability to tune, parameters that affect metrics like performance and security. Traditionally, these parameters are tuned by human experts with extensive knowledge of parameter names and effects; existing software was not designed to be self-tuning. Efforts to automate the isolation and tuning of parameters have yielded encouraging results; however, the parameters are identified manually. This thesis proposes the adaptation of reverse engineering techniques for automating the recovery of software tuning parameters. Tuning parameters from several industrially relevant applications are studied for patterns of use. These patterns are used to classify the parameters into a taxonomy, and to develop a metamodel of the source code elements and relationships needed to express them. An extractor is then built to obtain instances of the relationships from source code. The relationships are represented as graphs, which are manipulated and queried for instances of tuning parameter patterns. The recovery is implemented as a tool for finding tuning parameters in applications. Experimental results show that the approach is effective at recovering documented tuning parameters, as well as other undocumented ones. The results also indicate that the tuning parameter patterns are not specific to a particular application or application domain. / Thesis (Master, Computing) -- Queen's University, 2008-06-28 19:36:43.291
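A toy illustration of pattern-based recovery, assuming a far simpler pattern than the thesis's graph queries over an extracted fact base: scan Python source for os.getenv reads and report candidate tuning parameters.

```python
# Hedged sketch of the recovery idea: match a simple "parameter read"
# pattern (os.getenv calls) in source code. The thesis targets richer
# patterns over relationship graphs; this toy only illustrates the idea.
import ast

SOURCE = '''
import os
POOL_SIZE = int(os.getenv("DB_POOL_SIZE", "8"))
CACHE_MB  = int(os.getenv("CACHE_SIZE_MB", "64"))
'''

def find_env_parameters(source):
    params = []
    for node in ast.walk(ast.parse(source)):
        # Match os.getenv("NAME", ...) calls and record the parameter name.
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and node.func.attr == "getenv"
                and node.args
                and isinstance(node.args[0], ast.Constant)):
            params.append(node.args[0].value)
    return params

print(find_env_parameters(SOURCE))  # -> ['DB_POOL_SIZE', 'CACHE_SIZE_MB']
```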
8

Autonomic Workload Management for Database Management Systems

Zhang, Mingyi 07 May 2014 (has links)
In today’s database server environments, multiple types of workloads, such as on-line transaction processing, business intelligence and administrative utilities, can be present in a system simultaneously. Workloads may have different levels of business importance and distinct performance objectives. When the workloads execute concurrently on a database server, interference may occur and result in the workloads failing to meet the performance objectives and the database server suffering severe performance degradation. To evaluate and classify the existing workload management systems and techniques, we develop a taxonomy of workload management techniques. The taxonomy categorizes workload management techniques into multiple classes and illustrates a workload management process. We propose a general framework for autonomic workload management for database management systems (DBMSs) to dynamically monitor and control the flow of the workloads and help DBMSs achieve the performance objectives without human intervention. Our framework consists of multiple workload management techniques and performance monitor functions, and implements the monitor–analyze–plan–execute loop suggested in autonomic computing principles. When a performance issue arises, our framework provides the ability to dynamically detect the issue and to initiate and coordinate the workload management techniques. To detect severe performance degradation in database systems, we propose the use of indicators. We demonstrate a learning-based approach to identify a set of internal DBMS monitor metrics that best indicate the problem. We illustrate and validate our framework and approaches using a prototype system implemented on top of IBM DB2 Workload Manager. Our prototype system leverages the existing workload management facilities and implements a set of corresponding controllers to adapt to dynamic and mixed workloads while protecting DBMSs against severe performance degradation. / Thesis (Ph.D, Computing) -- Queen's University, 2014-05-07 13:35:42.858
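As a sketch of the monitor-analyze-plan-execute loop applied to admission control (the thresholds and the indicator metric are assumptions, not IBM DB2 Workload Manager specifics):

```python
# Illustrative MAPE loop for workload management; the indicator metric,
# thresholds and step sizes are assumed, not taken from the thesis.
import random

def read_indicator():
    """Monitor: sample an internal metric indicating contention (assumed)."""
    return random.uniform(0.0, 1.0)  # e.g. a normalized lock-wait ratio

def mape_step(admitted, max_concurrency=100):
    metric = read_indicator()                      # Monitor
    degraded = metric > 0.8                        # Analyze: indicator fires
    if degraded:
        plan = max(1, admitted // 2)               # Plan: throttle the workload
    else:
        plan = min(max_concurrency, admitted + 5)  # Plan: admit more work
    return plan                                    # Execute: new admission cap

cap = 50
for _ in range(3):
    cap = mape_step(cap)
    print("admission cap:", cap)
```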
