361 |
Adaptive Power Management for Autonomic Resource Configuration in Large-scale Computer Systems. Zhang, Ziming (Software engineer), 08 1900
In order to run and manage resource-intensive high-performance applications, large-scale computing and storage platforms have been evolving rapidly in various domains in both academia and industry. The energy expenditure required to operate and maintain these cloud computing infrastructures is a major factor influencing the overall profit and efficiency of most cloud service providers. Moreover, to mitigate the environmental damage caused by excessive carbon dioxide emissions, the amount of power consumed by enterprise-scale data centers should be constrained.

Generally speaking, there exists a trade-off between power consumption and application performance in large-scale computing systems, and how to balance these two factors has become an important topic for researchers and engineers in the cloud and HPC communities. Minimizing power usage while satisfying Service Level Agreements has therefore become one of the most desirable objectives in cloud computing research and implementation. Since the fundamental feature of the cloud computing platform is hosting workloads with a variety of characteristics in a consolidated and on-demand manner, it is essential to explore the inherent relationship between power usage and machine configurations. With an understanding of these relationships, researchers can develop effective power management policies that optimize productivity by balancing power usage and system performance.

In this dissertation, we develop an autonomic power-aware system management framework for large-scale computer systems. We propose a series of techniques including coarse-grain power profiling, VM power modelling, power-aware resource auto-configuration and a full-system power usage simulator. These techniques help us to understand the characteristics of power consumption of various system components. Based on these techniques, we are able to test various job scheduling strategies and develop resource management approaches that enhance the systems' power efficiency.
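The abstract does not specify the VM power model itself. A common baseline in this literature is a linear, utilisation-based model, and the sketch below illustrates that general idea only; the function names and all coefficients are hypothetical, not values from this dissertation.

```python
# Minimal sketch of a linear utilisation-based power model, a common
# baseline in power-management work. Coefficients are hypothetical.

def host_power(cpu_util, p_idle=120.0, p_peak=250.0):
    """Estimate host power draw (watts) from CPU utilisation in [0, 1]."""
    return p_idle + (p_peak - p_idle) * cpu_util

def vm_power_share(vm_util, all_vm_utils, p_idle=120.0, p_peak=250.0):
    """Attribute a share of host power to one VM: the dynamic part is
    split in proportion to utilisation; the idle part is split evenly."""
    total_util = sum(all_vm_utils)
    dynamic = (p_peak - p_idle) * total_util
    share = vm_util / total_util if total_util > 0 else 0.0
    return p_idle / len(all_vm_utils) + dynamic * share

if __name__ == "__main__":
    utils = [0.30, 0.10, 0.20]
    print(f"host: {host_power(sum(utils)):.1f} W")
    for u in utils:
        print(f"vm at {u:.0%}: {vm_power_share(u, utils):.1f} W")
```

By construction, the per-VM shares sum to the host estimate, which is the usual sanity check for this style of attribution.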
|
362 |
Implementation business-to-business electronic commerce website using active server pages. Teesri, Sumuscha, 01 January 2000
E-commerce is the current approach to doing any type of business online. It uses the power of digital information to understand the requirements and preferences of each client and partner, to tailor products and services to them, and then to deliver those products and services as swiftly as possible.
|
363 |
Implementation business-to-consumer electronic commerce website using asp.net web programming framework. Quiñones, Cesar, 01 January 2003
The purpose of this project is to demonstrate the integration of real-world, real-time e-commerce with the knowledge and experience gained by participating in the Master of Business Administration -- Information Management program at California State University, San Bernardino. This knowledge and experience are used to create a business-to-consumer (B2C) electronic commerce application (ECA) using available Internet and information management technology. The project presents all aspects of the simulation, beginning with background research on the canine services and supplies industry and ending with an e-commerce simulation and a post-implementation audit.
|
365 |
Towards a model for teaching distributed computing in a distance-based educational environment. Le Roux, Petra, 02 1900
Several technologies and languages exist for the development and implementation of distributed systems, and several models exist for teaching computer programming in a distance-based educational environment. Limited literature, however, is available on models for teaching distributed computing in such an environment. The focus of this study is to examine how distributed computing should be taught in a distance-based educational environment so as to ensure effective, high-quality learning for students, comparable to that of students with access to laboratories, as commonly found in residential universities. This leads to an investigation of the factors that contribute to the success of teaching distributed computing and how these factors can be integrated into a distance-based teaching model. The study consisted of a literature study, followed by a comparative study of available tools that aid the learning and teaching of distributed computing in a distance-based educational environment. A model to accomplish this teaching and learning is then proposed and implemented. The findings highlight the requirements and challenges that a student of distributed computing faces in a distance-based educational environment and emphasise how the proposed model can address these challenges. The study employed qualitative rather than quantitative research, as qualitative methods are designed to help researchers understand people and the social and cultural contexts within which they live. The research methods employed were design research, since an artefact was created, and a case study, since "how" and "why" questions needed to be answered. Data collection was done through a survey, and each method was evaluated via its own well-established evaluation methods, since evaluation is a crucial component of the research process. / Computing / M. Sc. (Computer Science)
|
366 |
Autonomic management in a distributed storage system. Tauber, Markus, January 2010
This thesis investigates the application of autonomic management to a distributed storage system. Effects on performance and resource consumption were measured in experiments carried out in a local-area test-bed. The experiments were conducted with components of one specific distributed storage system, but the results are intended to be applicable to a wide range of such systems, in particular those exposed to varying conditions.

The perceived characteristics of distributed storage systems depend on their configuration parameters and on various dynamic conditions. For a given set of conditions, one configuration may be better than another with respect to measures such as resource consumption and performance. Here, configuration parameter values were set dynamically and the results compared with a static configuration. It was hypothesised that under non-changing conditions this would allow the system to converge on a configuration more suitable than any that could be set a priori, and that the system could react to a change in conditions by adopting a more appropriate configuration.

Autonomic management was applied to the peer-to-peer (P2P) and data retrieval components of ASA, a distributed storage system. The effects were measured experimentally for various workload and churn patterns. The management policies and mechanisms were implemented using a generic autonomic management framework developed during this work. The motivation for both groups of experiments was to test management policies whose objective is to avoid unsatisfactory situations with respect to resource consumption and performance. Such situations occur when either the P2P layer or the data retrieval mechanism is configured statically.

In a statically configured P2P system, two unsatisfactory situations can be identified. The first arises when the frequency with which P2P node states are verified is low and membership churn is high: the P2P node state becomes inaccurate, leading to errors during routing and a reduction in performance. In this situation it is desirable to increase the frequency, to increase P2P state accuracy. The converse situation arises when the frequency is high and churn is low: network resources are used unnecessarily, which may also reduce performance, making it desirable to decrease the frequency.

In ASA's data retrieval mechanism, similar unsatisfactory situations can be identified with respect to the degree of concurrency (DOC), which controls the eagerness with which multiple redundant replicas are retrieved. An unsatisfactory situation arises when the DOC is low and there is large variation in the times taken to retrieve replicas; here it is desirable to increase the DOC, because retrieving more replicas in parallel returns a result to the user sooner. The converse arises when the DOC is high, there is little variation in retrieval time, and there is a network bottleneck close to the requesting client; here it is desirable to decrease the DOC, since the low variation removes any benefit of parallel retrieval, and the bottleneck means that decreasing parallelism reduces both bandwidth consumption and elapsed time for the user.

The experimental evaluations of autonomic management show promising results and suggest several future research topics, including optimisation of the managed mechanisms, alternative management policies, different evaluation methods, and the application of the developed management mechanisms to other facets of a distributed storage system. The findings could be exploited in building other distributed storage systems that harness storage on user workstations, since these are particularly likely to be exposed to varying, unpredictable conditions.
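The abstract does not reproduce the policy mechanics. As a loose illustration of the first adaptation described above, the sketch below raises the node-state verification frequency when observed churn is high and lowers it when churn is low; the thresholds, step sizes, and bounds are hypothetical, not values from the thesis.

```python
# Hypothetical sketch of the kind of feedback policy described above:
# adapt the P2P node-state verification interval to observed membership
# churn. All thresholds and step sizes are illustrative.

def next_verification_interval(interval_s, churn_events_per_min,
                               high_churn=10.0, low_churn=2.0,
                               min_s=5.0, max_s=300.0, factor=2.0):
    """Return the next verification interval in seconds.

    High churn -> verify more often (shorter interval) so routing state
    stays accurate; low churn -> verify less often to save bandwidth.
    """
    if churn_events_per_min > high_churn:
        interval_s /= factor
    elif churn_events_per_min < low_churn:
        interval_s *= factor
    return max(min_s, min(max_s, interval_s))

interval = 60.0
for churn in [0.5, 0.5, 15.0, 20.0, 1.0]:
    interval = next_verification_interval(interval, churn)
    print(f"churn={churn:5.1f}/min -> verify every {interval:.0f}s")
```

An analogous rule would adjust the DOC up when replica retrieval times vary widely and down when they are uniform and a client-side bottleneck exists.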
|
367 |
The Sea of Stuff: a model to manage shared mutable data in a distributed environment. Conte, Simone Ivan, January 2019
Managing data is one of the main challenges in distributed systems and in computer science in general. Data is created, shared, and managed across heterogeneous distributed systems of users, services, applications, and devices without a clear and comprehensive data model. This technological fragmentation and lack of a common data model result in a poor understanding of what data is, how it evolves over time, how it should be managed in a distributed system, and how it should be protected and shared. From a user perspective, for example, backing up data over multiple devices is a hard and error-prone process, and synchronising data with a cloud storage service can result in conflicts and unpredictable behaviour.

This thesis identifies three challenges in data management: (1) how to extend current data abstractions so that content, for example, is accessible irrespective of its location, versionable, and easy to distribute; (2) how to enable data storage that is transparent with respect to locations, users, applications, and services; and (3) how to allow data owners to protect data against malicious users and automatically control content over a distributed system. These challenges are studied in detail in relation to the current state of the art and addressed throughout the rest of the thesis.

The artefact of this work is the Sea of Stuff (SOS), a generic data model of immutable, self-describing, location-independent entities that allows the construction of a distributed system in which data is accessible and organised irrespective of its location, easy to protect, and automatically managed according to a set of user-defined rules. The evaluation demonstrates the viability of the SOS model for managing data in a distributed system and for using user-defined rules to manage data automatically across multiple nodes.
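The SOS model itself is defined in the thesis. As a loose illustration of what "immutable, self-describing, location-independent" can mean, the sketch below derives an entity's identity from a hash of its content, so the same data has the same name wherever it is stored; the manifest field names are hypothetical, not the actual SOS schema.

```python
# Loose illustration (not the actual SOS data model): an immutable,
# self-describing entity whose identity is a hash of its content, so it
# can be named and verified independently of where it is stored.

import hashlib
import json

def make_entity(content, previous_guid=None):
    """Wrap content (bytes) in an immutable, content-addressed manifest.
    `previous_guid` links versions into a history chain."""
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "size": len(content),
        "previous": previous_guid,          # version lineage
    }
    body = json.dumps(manifest, sort_keys=True).encode()
    manifest["guid"] = hashlib.sha256(body).hexdigest()
    return manifest

v1 = make_entity(b"hello world")
v2 = make_entity(b"hello, world!", previous_guid=v1["guid"])
print(v1["guid"][:16], "->", v2["guid"][:16])
```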
|
368 |
Distributed discovery and management of alternate internet paths with enhanced quality of service. Rakotoarivelo, Thierry, Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW, January 2006
The convergence of recent technology advances opens the way to new ubiquitous environments, where network-enabled devices collectively form invisible pervasive computing and networking environments around the users. These users increasingly require extensive applications and capabilities from these devices. Recent approaches propose that cooperating service providers, at the edge of the network, offer these required capabilities (i.e. services) instead of having them provided directly by the devices. The network thus evolves from a plain communication medium into an endless source of services. Such a service, namely an overlay application, is composed of multiple distributed application elements that cooperate via a dynamic communication mesh, namely an overlay association. The Quality of Service (QoS) perceived by the users of an overlay application greatly depends on the QoS of the communication paths of the corresponding overlay association.

This thesis asserts and shows that it is possible to provide QoS to an overlay application by using alternate Internet paths resulting from the composition of independent consecutive paths. Moreover, it demonstrates that these independent paths can be discovered, selected, and composed in a distributed manner within a community comprising a large number of autonomous cooperating peers, such as the aforementioned service providers. The main contributions of this thesis are thus i) a comprehensive description and QoS characteristic analysis of these composite alternate paths, and ii) an original architecture, termed SPAD (Super-Peer based Alternate path Discovery), which allows the discovery and selection of these alternate paths in a distributed manner. SPAD is a fully distributed system with no single point of failure that can be easily and incrementally deployed on the current Internet. It empowers end-users at the edge of the network, allowing them to directly discover and utilize alternate paths.
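SPAD's actual discovery protocol is described in the thesis; the sketch below only illustrates the underlying idea of composing two consecutive path segments through an intermediate peer and selecting the composition with the best end-to-end latency. The peer names and measurements are made up for illustration.

```python
# Illustrative sketch (not the SPAD protocol itself): compose alternate
# paths from two consecutive segments through an intermediate peer and
# pick the composition with the lowest end-to-end latency.

def best_composite_path(src, dst, segment_latency_ms):
    """segment_latency_ms maps (a, b) -> measured latency of segment a->b."""
    candidates = []
    for (a, mid), first in segment_latency_ms.items():
        if a != src:
            continue
        second = segment_latency_ms.get((mid, dst))
        if second is not None:
            candidates.append((first + second, [src, mid, dst]))
    return min(candidates, default=None)

# Hypothetical measurements between cooperating peers.
latencies = {
    ("A", "P1"): 20.0, ("P1", "B"): 35.0,
    ("A", "P2"): 15.0, ("P2", "B"): 60.0,
}
print(best_composite_path("A", "B", latencies))  # (55.0, ['A', 'P1', 'B'])
```

In a real deployment the candidate segments would be discovered and measured by distributed peers rather than held in one table; latency stands in here for whatever QoS metric is of interest.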
|
369 |
Network Design and Routing in Peer-to-Peer and Mobile Ad Hoc Networks. Merugu, Shashidhar, 19 July 2005
Peer-to-peer networks and mobile ad hoc networks are emerging distributed networks that share several similarities. Fundamental among these similarities is the decentralized role of each participating node: to route messages on behalf of other nodes, and thereby collectively realize communication between any pair of nodes. Messages are routed on a topology graph that is determined by the peer relationship between nodes. Although routing is fairly straightforward when the topology graph is static, the dynamic variations in the peer relationship that often occur in peer-to-peer and mobile ad hoc networks present challenges to routing.

In this thesis, we examine the interplay between routing messages and network topology design in two classes of these networks: unstructured peer-to-peer networks and sparsely-connected mobile ad hoc networks.

In unstructured peer-to-peer networks, we add structure to overlay topologies to support file sharing. Specifically, we investigate the advantages of designing overlay topologies with small-world properties to improve (a) search protocol performance and (b) network utilization. We show, using simulation, that "small-world-like" overlay topologies, where every node has many close neighbors and few random neighbors, exhibit high chances of locating files close to the source of a file search query. This improvement in search protocol performance is achieved while decreasing the traffic load on the links in the underlying network.
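The thesis evaluates these topologies by simulation; the following is only a toy sketch of the neighbour-selection rule the paragraph describes, where each node keeps many close neighbours plus a few random long-range links. Parameters and the ring distance metric are illustrative.

```python
# Toy sketch of "small-world-like" neighbour selection as described
# above: each node links to its closest peers plus a few random distant
# peers. The distance metric and parameters are illustrative.

import random

def pick_neighbors(node, all_nodes, distance, n_close=4, n_random=2, seed=0):
    """Return mostly-close neighbours plus a few random long links."""
    rng = random.Random(seed)
    others = [n for n in all_nodes if n != node]
    others.sort(key=lambda n: distance(node, n))
    close = others[:n_close]                       # many close neighbours
    far = others[n_close:]
    long_links = rng.sample(far, min(n_random, len(far)))  # few random ones
    return close + long_links

nodes = list(range(32))
ring_dist = lambda a, b: min(abs(a - b), 32 - abs(a - b))  # distance on a ring
print(pick_neighbors(0, nodes, ring_dist))
```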
In the context of sparsely-connected mobile ad hoc networks, where nodes provide connectivity via mobility, we present a protocol for routing in space and time, where the message forwarding decision involves not only where to forward (space) but also when to forward (time). We introduce space-time routing tables and develop methods to compute these routing tables for those instances of ad hoc networks where node mobility is predictable, either over a finite horizon or indefinitely due to periodicity in node motion. Furthermore, when node mobility is unpredictable, we investigate several forwarding heuristics to address the scarcity of transmission opportunities in these sparsely-connected ad hoc networks. In particular, we present the advantages of fragmenting messages and augmenting them with erasure codes to improve end-to-end message delivery performance.
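No erasure-code construction is given in the abstract. As a stand-in, the sketch below uses single XOR parity, the smallest example of the idea that a message split into fragments with redundancy can be rebuilt from a subset of fragments; the thesis's actual codes are not specified here, and real systems would use stronger codes such as Reed-Solomon.

```python
# Minimal stand-in for the erasure-coding idea described above: split a
# message into k fragments plus one XOR parity fragment, so any k of the
# k+1 fragments suffice to rebuild the message.

from functools import reduce

def encode(message, k):
    """Split message (bytes) into k padded fragments plus XOR parity."""
    size = -(-len(message) // k)                       # ceil(len/k)
    frags = [message[i*size:(i+1)*size].ljust(size, b"\0") for i in range(k)]
    parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*frags))
    return frags + [parity]

def recover(fragments):
    """Rebuild the one missing fragment (marked None) by XOR of the rest."""
    if None not in fragments:
        return fragments
    present = [f for f in fragments if f is not None]
    rebuilt = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*present))
    fragments = list(fragments)
    fragments[fragments.index(None)] = rebuilt
    return fragments

msg = b"space-time routing"
frags = encode(msg, k=3)
frags[1] = None                                        # one fragment lost in transit
rebuilt = recover(frags)
print(b"".join(rebuilt[:3]).rstrip(b"\0") == msg)      # True
```

Each fragment can then travel on a different opportunistic contact, so delivery succeeds even if one contact never materialises.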
|
370 |
Risk-based proactive availability management - attaining high performance and resilience with dynamic self-management in Enterprise Distributed Systems. Cai, Zhongtang, 10 January 2008
Complex distributed systems have come to play a critical role in industry and society. Examples include distributed information-flow systems, which continuously acquire, manipulate, and disseminate information across an enterprise's distributed sites and machines, and distributed server applications co-deployed in one or more shared data centers, each with different performance/availability requirements that vary over time and compete for shared resources. Consequently, it has become more important for enterprise-scale IT infrastructure to provide timely and sustained/reliable delivery and processing of service requests. This has not become easier, and may indeed have become harder, despite more than 30 years of progress in distributed computer connectivity, availability, and reliability [ReliableDistributedSys]. Reasons include: the increasing complexity of enterprise-scale computing infrastructure; the distributed nature of these systems, which makes them prone to failures, e.g., because of inevitable Heisenbugs in complex distributed systems; the need to consider diverse and complex business objectives and policies, including risk preferences and attitudes, in enterprise computing; conflicts between performance and availability, and the varying importance of sub-systems in an enterprise's distributed infrastructure, which compete for resources in the typical shared environment; and the best-effort nature of resources such as network resources, which implies that resource availability is itself an issue.
This thesis proposes a novel business-policy-driven, risk-based, automated availability management approach, which uses an automated decision engine to make availability decisions and meet business policies while optimizing overall system utility, uses utility theory to capture users' risk attitudes, and addresses the potentially conflicting business goals and resource demands in enterprise-scale distributed systems.

For critical and complex enterprise applications, since a key contributor to application utility is the time taken to recover from failures, we develop a novel proactive fault tolerance approach, which uses online methods for failure prediction to dynamically determine the acceptable amounts of additional processing and communication resources to be used (i.e., costs) to attain certain levels of utility and acceptable delays in failure recovery.
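The decision logic itself is not reproduced in the abstract. The sketch below illustrates the general shape of such a cost/utility trade-off: given a predicted failure probability, provision a proactive replica only when the expected utility saved through faster recovery exceeds the replica's resource cost. The function and all numbers are hypothetical, not the thesis's decision engine.

```python
# Hypothetical sketch of a utility-based proactive fault tolerance
# decision: replicate only if the expected recovery-time savings
# outweigh the resource cost of the replica. Values are illustrative.

def should_replicate(p_fail, utility_loss_per_s, reactive_recovery_s,
                     proactive_recovery_s, replica_cost_utility):
    """Return True iff expected saving from faster recovery beats cost."""
    expected_saving = p_fail * utility_loss_per_s * (
        reactive_recovery_s - proactive_recovery_s)
    return expected_saving > replica_cost_utility

for p in (0.01, 0.2, 0.6):
    decision = should_replicate(p, utility_loss_per_s=5.0,
                                reactive_recovery_s=30.0,
                                proactive_recovery_s=2.0,
                                replica_cost_utility=10.0)
    print(f"p(fail)={p:.2f} -> replicate: {decision}")
```

A risk-averse utility function would weight the loss term more heavily, shifting the decision toward replication at lower failure probabilities.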
Since resource availability itself is often not guaranteed in typical shared enterprise IT environments, this thesis also provides IQ-Paths, with probabilistic service guarantees, to address the dynamic network behavior of realistic enterprise computing environments. The risk-based formulation is used as an effective way to link the operational guarantees expressed by utility, and enforced by the PGOS algorithm, with the higher-level business objectives sought by end users.
Together, this thesis proposes a novel availability management framework and methods for large-scale enterprise applications and systems, with the goal of providing different levels of performance/availability guarantees for multiple applications and sub-systems in a complex shared distributed computing infrastructure. More specifically, it addresses the following problems. For data center environments: (1) how to provide availability management for applications and systems that vary both in resource requirements and in their importance to the enterprise, based on both operational-level quantities and business-level objectives; (2) how to deal with managerial policies such as risk attitude; and (3) how to deal with the trade-off between performance and availability, given limited resources in a typical data center. Since realistic business settings extend beyond single data centers, a second set of problems concerns predictable and reliable operation in wide-area settings. For such systems, we explore (4) how to provide high availability in widely distributed operational systems with low-cost fault tolerance mechanisms, and (5) how to provide probabilistic service guarantees given best-effort network resources.
|