181. Multi-dimensional optimization for cloud-based multi-tier applications. Jung, Gueyoung, 09 November 2010.
Emerging trends toward cloud computing and virtualization are opening new avenues to meet the enormous demands for space, resource utilization, and energy efficiency in modern data centers. By hosting many multi-tier applications in consolidated environments, cloud infrastructure providers enable resources to be shared among these applications at a very fine granularity. Meanwhile, resource virtualization has gained considerable attention in the design of computer systems and has become a key ingredient of cloud computing. It significantly improves aggregate power efficiency and resource utilization by enabling resource consolidation, and it allows infrastructure providers to manage their resources in an agile way under highly dynamic conditions.
However, these trends also raise significant challenges for researchers and practitioners seeking agile resource management in consolidated environments. First, they must deal with the very different responsiveness requirements of different applications, while handling dynamic changes in resource demands as the applications' workloads vary over time. Second, when provisioning resources, they must account for management costs such as power consumption and adaptation overheads (i.e., the overheads incurred by dynamically reconfiguring resources). Dynamic provisioning of virtual resources entails an inherent performance-power tradeoff, and indiscriminate adaptations can impose significant overheads on both power consumption and end-to-end performance. Hence, to achieve agile resource management, it is important to thoroughly investigate the performance characteristics of the deployed applications, precisely account for the costs caused by adaptations, and then balance benefits against costs. Fundamentally, the research question is how to dynamically provision the available resources across all deployed applications so as to maximize overall utility under time-varying workloads, while accounting for these management costs.
Given the scope of the problem space, this dissertation aims to develop an optimization system that not only meets the performance requirements of deployed applications, but also addresses the tradeoffs between performance, power consumption, and adaptation overheads. To this end, it makes two distinct contributions. First, I show that adaptations applied to cloud infrastructures can impose significant overheads on both end-to-end response time and server power consumption, and that these costs vary in intensity and time scale with the workload, the adaptation type, and the performance characteristics of the hosted applications. Second, I address the multi-dimensional optimization between server power consumption, performance benefit, and the transient costs incurred by the various adaptations. Additionally, I incorporate the overhead of the optimization procedure itself into the problem formulation: system optimization approaches typically entail intensive computation and can incur long delays when searching the huge configuration space of a cloud infrastructure, so this cost cannot be ignored when adaptation plans are designed. For this multi-dimensional optimization, a scalable optimization algorithm and a hierarchical adaptation architecture are developed to handle many applications, hosting servers, and adaptation types, and to support adaptation decisions at multiple time scales.
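The core tradeoff described in this abstract, maximizing overall utility while charging for power consumption and transient adaptation overheads, can be illustrated as a net-utility comparison across candidate adaptation plans. This is only a hedged sketch, not the dissertation's model: the plan names, weights, and cost figures below are invented for illustration.

```python
# Illustrative sketch (not the dissertation's algorithm): score each candidate
# adaptation plan by its performance benefit minus weighted power and
# transient adaptation costs, then pick the plan with the highest score.
def net_utility(perf_benefit, power_cost, adapt_cost, weights=(1.0, 0.5, 0.3)):
    w_perf, w_power, w_adapt = weights  # hypothetical weighting factors
    return w_perf * perf_benefit - w_power * power_cost - w_adapt * adapt_cost

def best_plan(plans):
    """plans: iterable of (name, perf_benefit, power_cost, adapt_cost)."""
    return max(plans, key=lambda p: net_utility(p[1], p[2], p[3]))

# Hypothetical candidate adaptations for one hosted application.
plans = [
    ("no-op",        0.0, 0.0, 0.0),   # do nothing: zero benefit, zero cost
    ("add-vm",       5.0, 3.0, 2.0),   # large benefit, but power-hungry
    ("migrate-tier", 4.0, 1.0, 1.5),   # slightly less benefit, far cheaper
]
print(best_plan(plans)[0])  # the cheaper plan wins once costs are charged
```

The point of the sketch is that the highest raw performance benefit ("add-vm") loses once power and adaptation costs are subtracted, which is exactly the kind of cost-aware decision the dissertation formalizes.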
182. Prediction-based load balancing heuristic for a heterogeneous cluster. Saranyan, N., 09 1900.
Load balancing has been a topic of interest in both academia and industry, mainly
because of the scope for performance enhancement that is available to be exploited in
many parallel and distributed processing environments. Among the many approaches
that have been used to solve the load balancing problem, we find that only very few
use prediction of code-execution times. Our reasoning for this is that the field of code prediction
is in its infancy. As of this writing, we are not aware of any load-balancing approach that
uses prediction of code-execution times while relying on neither information provided
by the user nor an off-line prediction step whose results are then used at run-time.
In this context, it is important to note that
prior studies have indicated the feasibility of predicting the CPU requirements of general
application programs.
Our motivation in using prediction-based load balancing is to determine the feasibility
of the approach. The reasoning behind that is the following: if prediction-based load
balancing does yield good performance, then it may be worthwhile to develop a predictor
that can give a rough estimate of the length of the next CPU burst of each process. While
high accuracy of the predictor is not essential, its computational overhead must be
sufficiently small so as not to offset the gain from load balancing.
As for the system, we assume a set of autonomous computers connected by
a fast, shared medium. The individual nodes can vary in the additional hardware and
software that may be available in them. Further, we assume that the processes in the
workload are sequential.
The first step is to fix the parameters for our assumed predictor. Then, an algorithm
that takes into account the characteristics of the predictor is proposed. There are many
trade-off decisions in the design of the algorithm, including certain steps in which we
have relied on a trial-and-error method to find suitable values. The next logical step is
to verify the efficiency of the algorithm. To assess its performance, we carry out an
event-driven simulation. We also evaluate the robustness of the algorithm with respect to the
characteristics of the predictor.
The contribution of the thesis is as follows: It proposes a load-balancing algorithm
for a heterogeneous cluster of workstations connected by a fast network. The simulation
assumes that the heterogeneity is limited to variability in processor clock rates, but
the algorithm can be applied when the nodes exhibit other types of heterogeneity as well.
The algorithm uses predicted CPU burst lengths as its basic input. The performance
of the algorithm is evaluated through event-driven simulation using assumed
workload distributions. The results of the simulation show that the algorithm yields a
good improvement in response times over the scenario in which no load redistribution is
done.
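As a hedged illustration of the kind of predictor this abstract assumes, and not the thesis's actual algorithm, the sketch below pairs an exponentially weighted average of past CPU bursts with a placement rule that normalises each node's predicted completion time by its clock rate, the only form of heterogeneity the simulation models. All parameter values and the smoothing scheme are invented for illustration.

```python
# Hedged sketch, not the thesis's algorithm: an exponentially weighted
# predictor of a process's next CPU burst, plus a placement rule that
# sends the process to the node with the smallest predicted finish time,
# normalised by that node's clock rate.
def predict_next_burst(history, alpha=0.5, initial=10.0):
    """Exponential average: tau <- alpha * t + (1 - alpha) * tau."""
    tau = initial
    for burst in history:
        tau = alpha * burst + (1 - alpha) * tau
    return tau

def place(burst_history, node_loads, clock_rates):
    """Index of the node minimising (queued work + predicted burst) / speed."""
    burst = predict_next_burst(burst_history)
    costs = [(load + burst) / rate
             for load, rate in zip(node_loads, clock_rates)]
    return costs.index(min(costs))

# Two nodes with equal queued work; node 1 is twice as fast, so it is chosen.
print(place([8.0, 12.0, 10.0], node_loads=[20.0, 20.0], clock_rates=[1.0, 2.0]))
```

A rough predictor like this is cheap (one multiply-add per observed burst), which matches the abstract's requirement that the predictor's overhead not offset the gain from load balancing.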
183. Algorithms for distributed caching and aggregation. Tiwari, Mitul, 29 August 2008.
Not available
184. Essays on market-based information systems design and e-supply chain. Guo, Zhiling (1974-), 23 June 2011.
Not available
185. Investigation of a router-based approach to defense against Distributed Denial-of-Service (DDoS) attack. Chan, Yik-Kwan, Eric (陳奕鈞), January 2004.
Master of Philosophy, Computer Science and Information Systems.
186. Packet routing on mesh-connected computers. Cheung, Steven (張治昌), January 1992.
Master of Philosophy, Computer Science.
187. High-speed network interface for commodity SMP clusters. Wong, Kwan-po (黃君保), January 2000.
Master of Philosophy, Computer Science and Information Systems.
188. Load-balancing in distributed multi-agent computing. Chow, Ka-po (周嘉寶), January 2000.
Master of Philosophy, Electrical and Electronic Engineering.
189. Inhabiting the information space: Paradigms of collaborative design environments. Shakarchī, ʻAlī, 11 1900.
The notion of information space (iSpace) is that a collective context of
transmitters and receivers can serve as a medium through which a group of
human beings or software agents can share, exchange, and apply data and
knowledge. Inhabiting this space requires a perception of its dimensions
and limits, and an understanding of the way data is diffused among its
inhabitants.
One of the important aspects of iSpace is that it expands the limits of
communication between distributed designers, allowing them to carry
out tasks that are very difficult to accomplish with current communication
technologies, which are diverse but not well integrated.
In architecture, design team members often rely on each other's
expertise to review and solve design problems, and interact with one
another for critiques and presentations. This process is called
Collaborative Design. The main focus of this research is to apply this
process of collaboration to the iSpace so that it serves as a
supplementary medium of communication, rather than a replacement for it,
and to understand how design team members can use it to enhance the
effectiveness of the design process and increase the efficiency of
communication.
The first chapter will give an overview of the research, define its
objectives and scope, and give background on the evolving technological
media in design practice. This chapter will also summarize some case
studies of collaborative design projects as real examples to introduce
the subject.
The second chapter will study collaborative design activities with
respect to creative problem solving, group behaviour, and the
information flow between members. It will also examine the technical
and social problems of distributed collaboration.
The third chapter will give a definition of the iSpace and analyze its
components (epistemological, utilitarian, and cultural) based on research
done by others. It will also study the impact of the iSpace on
the design process in general and on the architectural product in
particular.
The fourth chapter will describe software programs written as
prototypes for this research that allow realtime and non-realtime
collaboration over the internet, tailored specifically to design team
use in order to facilitate distributed collaboration in architecture.
These prototypes are:
1. pinUpBoard (realtime shared display board for pin-ups)
2. sketchBoard (realtime whiteboarding application with multisessions)
3. mediaBase (shared database management system)
4. teamCalendar (shared interactive calendar on the internet)
5. talkSpace (organized forums for discussions)
190. Asymptotic behaviour of an overloading queueing network with resource pooling. Brown, Louise Eleanor, 05 1900.
No description available.