311.
Managing Applications and Data in Distributed Computing Infrastructures. Toor, Salman Zubair. January 2012.
During the last decades, the demand for large-scale computational and storage resources in science has increased dramatically. New computational infrastructures enable scientists to enter a new mode of science, e-science, which complements traditional theory and experiments. E-science is inherently interdisciplinary, involving researchers from several disciplines, and also opens up large-scale collaborative efforts where physically distributed groups of scientists share software tools and data to make scientific progress. Within the field of e-science, new challenges are emerging in managing large-scale distributed computing efforts and distributed data sets. Different models, e.g. grids and clouds, have been introduced over the years, but new solutions built on these models are needed to enable easy and flexible use of distributed computing infrastructures by application scientists.

In the first part of the thesis, application execution environments are studied. The goal is to hide technical details of the underlying distributed computing infrastructure and expose secure and user-friendly environments to the end users. First, a general-purpose solution using portal technology is described, enabling transparent and easy usage of a variety of grid systems. Then a problem-solving environment for genetic analysis is presented, in which the statistical software R is used as a workflow engine, enhanced with grid-enabled routines for performing the computationally demanding parts of the analysis. Finally, the issue of resource allocation in grid systems is briefly studied, and certain modifications to the distributed resource-brokering model of the ARC middleware are proposed.

The second part of the thesis presents solutions for managing and analyzing scientific data using distributed storage resources. First, a new reliable and secure file-oriented distributed storage system, Chelonia, is presented. The architectural design of the system is described, implementation issues are considered, and the stability and scalable performance of Chelonia are verified using several test scenarios. Then, tools for providing an efficient and easy-to-use platform for data analysis built on Chelonia are presented. Here, a database-driven approach is explored: an extended architecture where Chelonia is combined with the Web-Service MEDiator (WSMED) system is implemented, providing web service tools to query data without any further programming. This approach is then developed further, combining Chelonia with SciSPARQL, a query language that extends SPARQL to queries over numeric scientific data. The result is a system capable of interactive analysis of distributed data sets; advanced application-specific analysis requirements can be met by writing customized modules in Java, Python or C. The viability of the approach is demonstrated by applying the system to data produced by URDME, a computational environment in systems biology, and results for sample queries expressed in SciSPARQL are presented. Finally, the use of an open-source storage cloud, OpenStack Swift, for analysis of data from CERN experiments is considered. Here, a pilot implementation for the ROOT data analysis framework is presented together with a performance evaluation. / eSSENCE
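To make the database-driven querying concrete, here is a minimal sketch of how a client might submit a SciSPARQL-style query over HTTP. The endpoint URL, namespace, query text, and JSON result format are all hypothetical stand-ins, not the actual Chelonia, WSMED, or SciSPARQL interfaces.

```python
# Hypothetical sketch: the endpoint, namespace, and result format below are
# illustrative stand-ins, not the actual Chelonia/WSMED/SciSPARQL interfaces.
import json
import urllib.parse
import urllib.request

ENDPOINT = "http://example.org/scisparql"  # placeholder query endpoint

QUERY = """
PREFIX ex: <http://example.org/ns#>
SELECT ?t ?x
WHERE { ?run ex:model "urdme-demo" ; ex:time ?t ; ex:result ?x . }
"""

def run_query(endpoint: str, query: str) -> dict:
    """POST a query and decode a JSON result set (format assumed)."""
    data = urllib.parse.urlencode({"query": query}).encode()
    with urllib.request.urlopen(endpoint, data=data) as resp:
        return json.load(resp)

# bindings = run_query(ENDPOINT, QUERY)  # numeric analysis would follow here
```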
312.
On Distributed Optimization in Networked Systems. Johansson, Björn. January 2008.
Numerous control and decision problems in networked systems can be posed as optimization problems. Examples include the framework of network utility maximization for resource allocation in communication networks, multi-agent coordination in robotics, and collaborative estimation in wireless sensor networks (WSNs). In contrast to classical distributed optimization, which focuses on improving computational efficiency and scalability, these new applications require simple mechanisms that can operate under limited communication. In this thesis, we develop several novel mechanisms for distributed optimization under communication constraints, and apply these to several challenging engineering problems.

In particular, we devise three tailored optimization algorithms relying only on nearest-neighbor, also known as peer-to-peer, communication. Two of the algorithms are designed to minimize a non-smooth convex additive objective function, in which each term corresponds to a node in a network. The first method is an extension of the randomized incremental subgradient method where the update order is given by a random walk on the underlying communication graph, resulting in a randomized peer-to-peer algorithm with guaranteed convergence properties. The second method combines local subgradient iterations with consensus steps to average local update directions. The resulting optimization method can be executed in a peer-to-peer fashion and analyzed using epsilon-subgradient methods. The third algorithm is a center-free algorithm, which solves a non-smooth resource allocation problem with a separable additive convex objective function subject to a constant sum constraint.

Then we consider cross-layer optimization of communication networks, and demonstrate how optimization techniques allow us to engineer protocols that mimic the operation of distributed optimization algorithms to obtain an optimal resource allocation. We describe a novel use of decomposition methods for cross-layer optimization, and present a flowchart that can be used to categorize and visualize a large part of the current literature on this topic. In addition, we devise protocols that optimize the resource allocation in frequency-division multiple access (FDMA) networks and spatial reuse time-division multiple access (TDMA) networks, respectively.

Next we investigate some variants of the consensus problem for multi-robot coordination, for which it is standard to assume that agents should meet at the barycenter of the initial states. We propose a negotiation strategy to find an optimal meeting point in the sense that the agents' trajectories to the meeting point minimize a quadratic cost criterion. Furthermore, we demonstrate how an augmented state vector can be used to boost the convergence rate of the standard linear distributed averaging iterations, and we present necessary and sufficient convergence conditions for a general version of these iterations.

Finally, we devise a generic optimization software component for WSNs. To this end, we implement some of the most promising optimization algorithms developed by ourselves and others in our WSN testbed, and present experimental results, which show that the proposed algorithms work surprisingly well. / QC 20100813
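As an illustration of the second class of methods described above (local subgradient iterations combined with consensus averaging of local estimates), the following minimal sketch uses an assumed ring network and simple absolute-value objectives; it is not the thesis code, only the flavor of such peer-to-peer algorithms.

```python
# A minimal sketch (not the thesis code) of subgradient steps combined with
# consensus averaging, for minimizing f(x) = sum_i f_i(x) over a network.
import numpy as np

rng = np.random.default_rng(0)
n = 5                          # number of nodes
targets = rng.normal(size=n)   # node i privately holds f_i(x) = |x - targets[i]|

# Doubly stochastic mixing matrix for a ring graph (Metropolis-style weights).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

x = np.zeros(n)                # local estimates, one per node
for k in range(1, 2001):
    g = np.sign(x - targets)   # local subgradients of |x_i - t_i|
    x = W @ x - (1.0 / k) * g  # consensus step + diminishing step size

print(np.round(x, 3), "vs median", np.median(targets))
```

Each node mixes only with its ring neighbors, yet all local estimates drift toward the global minimizer, here the median of the private targets.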
313.
Empirical Testing of the Austrian Business Cycle Theory: Modelling of the Short-run Intertemporal Resource Allocation. Selleby, Karl; Helmersson, Tobias. January 2009.
The Austrian Business Cycle Theory (ABC) provides a qualitative explanation of why economies go through ups and downs in terms of national income, production output and labor employment. The theory states that interest and money supply policy distort the time preferences of economic agents. If the monetary authority reduces the interest rate through artificial credit expansion, the new economic conditions induce both increased production and increased consumption. The framework of the Austrian theory depends on savings to fuel investments, i.e. reduced consumption in order to create increased future consumption. Artificially induced expansions create a wedge between these producer and consumer preferences, and prolonging the process widens the gap between the economic state and the long-term sustainable free-market equilibrium. When the financial system eventually becomes unable to maintain the credit expansion that upholds the economy, capital investments are abandoned, resulting in an unavoidable recession.

The purpose of this thesis is to analyze the theory from a short-run perspective, using data from the United Kingdom economy. The theory has previously been tested primarily from a long-run perspective, and mainly on the American economy. To this end, a model was constructed based on the descriptions of the theory by Hayek and Garrison, economists of the Austrian school. To model the ABC theory empirically, the ratio between consumption and investment (C/I), i.e. the intertemporal resource allocation, was calculated and used as the dependent variable in regressions with money aggregates, credit and the interest rate gap as independent variables. The empirical findings give some support to the theory, with several results directly in its favor. Credit was shown to better explain changes in the C/I ratio than money aggregates, indicating that credit is more directly suited for investments. The coefficient for the interest rate gap, the difference between the natural interest rate and the market interest rate, showed strong significance. Overall differences between economic expansions and recessions were found to be statistically significant, which lends support to the model.
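As a rough sketch of the regression setup described above, the fragment below fits the C/I ratio on money, credit, and the interest rate gap by ordinary least squares. The data here are synthetic stand-ins, not the UK series used in the thesis.

```python
# A hedged sketch of the kind of regression described: variable names and
# synthetic data are illustrative, not the thesis's actual UK dataset.
import numpy as np

rng = np.random.default_rng(1)
T = 120                                   # quarterly observations (synthetic)
credit = rng.normal(size=T)               # growth of credit aggregates
money = rng.normal(size=T)                # growth of money aggregates
rate_gap = rng.normal(size=T)             # natural minus market interest rate
ci_ratio = 0.8 * credit + 0.5 * rate_gap + rng.normal(scale=0.5, size=T)

# OLS: C/I ratio regressed on money, credit, and the interest rate gap.
X = np.column_stack([np.ones(T), money, credit, rate_gap])
beta, *_ = np.linalg.lstsq(X, ci_ratio, rcond=None)
print(dict(zip(["const", "money", "credit", "rate_gap"], np.round(beta, 3))))
```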
314.
Efficient Resource Allocation in Multiflow Wireless Networks. January 2011.
We consider the problem of allocating resources in large wireless networks in which multiple information flows must be accommodated. In particular, we seek a method for selecting schedules, routes, and power allocations for networks with terminals capable of user-cooperation at the signal level. To that end, we adopt a general information-theoretic communications model, in which the data rate of a wireless link is purely a function of transmission power, pathloss and interference.

We begin by studying the case of resource allocation when only point-to-point links are available. The problem is NP-hard in this case, requiring an exponentially complex exhaustive search to guarantee an optimal solution. This is prohibitively difficult for anything but the smallest of networks, leading us to approximate the problem using a decomposition approach. We construct the solution iteratively, developing polynomial-time algorithms to optimally allocate resources on a per-frame basis. We then update the network graph to reflect the resources consumed by the allocated frame. To manage this decomposition, we present a novel tool, termed the Network-Flow Interaction Chart. By representing the network in both space and time, our techniques trade off interference with throughput for each frame, offering considerable performance gains over schemes of similar complexity.

Recognizing that our approach requires a large amount of overhead, we go on to develop a method by which it may be decentralized. We find that while the overhead is considerably lower, the limited solution space results in suboptimal solutions in a throughput sense.

We conclude with a generalization of the Network-Flow Interaction Chart to address cooperative resource allocation. We represent cooperative links using "metanodes," which are made available to the allocation algorithms alongside point-to-point links and will be selected only if they offer higher throughput. The data-carrying capability of the cooperative links is modeled using Decode-and-Forward achievable rates, which are functions of transmit power and interference, and so may be incorporated directly into our framework. We demonstrate that allocations incorporating cooperation result in significant performance gains as compared to using point-to-point links alone.
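The information-theoretic link model above can be made concrete with a short sketch: under the standard Shannon/SINR assumption, each link's rate follows from transmit powers, path gains, and interference. This is an illustrative reading of the model, not the thesis implementation.

```python
# A minimal sketch, under the abstract's information-theoretic model, of link
# rates as a function of power, pathloss, and interference (Shannon formula).
import numpy as np

rng = np.random.default_rng(2)
n = 4                                    # number of simultaneous links
G = rng.uniform(0.1, 1.0, size=(n, n))   # G[i, j]: path gain, tx j -> rx i
p = np.full(n, 1.0)                      # transmit powers (watts)
noise = 1e-2

signal = np.diag(G) * p                  # desired-signal power at each receiver
interference = G @ p - signal            # power from all other transmitters
rates = np.log2(1.0 + signal / (noise + interference))  # bits/s/Hz per link
print(np.round(rates, 3))
```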
315.
On Tractability Aspects of Optimal Resource Allocation in OFDMA Systems. Yuan, Di; Joung, Jingon; Keong Ho, Chin; Sun, Sumei. January 2013.
Joint channel and rate allocation with power minimization in orthogonal frequency-division multiple access (OFDMA) has attracted extensive attention. Most of the research has dealt with the development of suboptimal but low-complexity algorithms. This paper contributes new insights gained by revisiting tractability aspects of computing the optimum solution. Previous complexity analyses have been limited by assumptions of fixed power on each subcarrier or power-rate functions that locally grow arbitrarily fast. The analysis under the former assumption does not generalize to problem tractability with variable power, whereas the latter assumption prohibits the result from being applicable to well-behaved power-rate functions. As the first contribution, we overcome the previous limitations by rigorously proving the problem's NP-hardness for the representative logarithmic rate function. Next, we extend the proof to reach a much stronger result, namely, that the problem remains NP-hard even if the channels allocated to each user are restricted to be a consecutive block of given size. We also prove that, under these restrictions, there is a special case with polynomial-time tractability. Then, we treat the problem class where the channels can be partitioned into an arbitrarily large but constant number of groups, each having uniform gain for every individual user. For this problem class, we present a polynomial-time algorithm and provide its optimality guarantee. In addition, we prove that the recognition of this class is polynomial-time solvable. / Funding agencies: Swedish Research Council; Linköping-Lund Excellence Center in Information Technology; Center for Industrial Information Technology of Linköping University
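For intuition on the problem structure, the sketch below inverts the logarithmic rate function to get the minimum power needed for a target rate on a subcarrier, then applies a naive greedy channel assignment. The greedy step is only for illustration and is not one of the paper's algorithms; it is not optimal and can even leave a user with no subcarriers.

```python
# A sketch assuming the logarithmic rate function r = log2(1 + g*p) with unit
# bandwidth and noise; the greedy assignment is an illustrative heuristic.
import numpy as np

def min_power(rate: float, gain: float) -> float:
    """Invert r = log2(1 + gain * p): the power needed to reach `rate`."""
    return (2.0 ** rate - 1.0) / gain

rng = np.random.default_rng(3)
gains = rng.uniform(0.2, 2.0, size=(3, 8))   # 3 users x 8 subcarriers
demand = np.array([4.0, 2.0, 3.0])           # per-user rate targets (bits)

# Give each subcarrier to the user with the best gain, then spread each
# user's rate demand evenly over the channels it owns.
owner = gains.argmax(axis=0)
total = 0.0
for u, r in enumerate(demand):
    mine = np.where(owner == u)[0]
    per_channel_rate = r / max(len(mine), 1)
    total += sum(min_power(per_channel_rate, gains[u, c]) for c in mine)
print(f"total transmit power (greedy): {total:.3f}")
```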
316.
Effective Resource Allocation for Non-cooperative Spectrum Sharing. Jacob-David, Dany D. 13 October 2011.
Spectrum access protocols have been proposed recently to provide flexible and efficient use of the available bandwidth. Game theory has been applied to the analysis of the problem to determine the most effective allocation of the users’ power over the bandwidth. However, prior analysis has focussed on Shannon capacity as the utility function, even though it is known that real signals do not, in general, meet the Gaussian distribution assumptions of that metric. In a non-cooperative spectrum sharing environment, the Shannon capacity utility function results in a water-filling solution. In this thesis, the suitability of the water-filling solution is evaluated when using non-Gaussian signalling, first in a frequency non-selective environment, to focus on the resource allocation problem and its outcomes. It is then extended to a frequency selective environment to examine the proposed algorithm in a more realistic wireless setting. It is shown in both scenarios that more effective resource allocation can be achieved when the utility function takes into account the actual signal characteristics. Further, it is demonstrated that higher rates can be achieved with lower transmitted power, resulting in a smaller spectral footprint, which allows more efficient use of the spectrum overall. Finally, future spectrum management is discussed, where waveform adaptation is examined as an additional option to the well-known spectrum agility, rate and transmit power adaptation when performing spectrum sharing.
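For reference, the water-filling solution that the thesis evaluates has a simple numerical form. The sketch below computes it by bisection for a single user over parallel channels, with made-up noise-to-gain levels; it is illustrative only.

```python
# A minimal water-filling sketch (the classic solution the thesis evaluates):
# allocate total power P across channels with noise-to-gain levels n_i.
import numpy as np

def waterfill(levels: np.ndarray, total_power: float) -> np.ndarray:
    """Return per-channel powers p_i = max(mu - levels_i, 0) summing to total."""
    # Bisection on the water level mu.
    lo, hi = levels.min(), levels.max() + total_power
    for _ in range(100):
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - levels, 0.0).sum() > total_power:
            hi = mu
        else:
            lo = mu
    return np.maximum(mu - levels, 0.0)

levels = np.array([0.2, 0.5, 1.0, 2.5])      # noise/gain per channel
p = waterfill(levels, total_power=2.0)
print(np.round(p, 3), "sum =", p.sum().round(3))
```

Channels with poor conditions (high noise-to-gain levels) receive little or no power, which is exactly the behavior whose suitability the thesis questions for non-Gaussian signalling.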
317.
Chorus: Model Knowledge Base for Performance Modeling in Datacenters. Chen, Jin. 05 January 2012.
To reduce management costs, operators multiplex several concurrent applications in large datacenters. However, uncontrolled resource sharing between co-hosted applications often results in performance degradation, creating violations of service level agreements (SLAs) for service providers. Therefore, per-application performance modeling for dynamic resource allocation in shared-resource environments has recently emerged as a promising way to meet per-application SLAs.

We introduce Chorus, an interactive performance modeling framework for building application performance models incrementally and on the fly. It can be used to support complex, multi-tier resource allocation and/or what-if performance inquiry in modern datacenters, such as Clouds. Chorus consists of (i) a declarative high-level language for providing semantic model guidelines, such as model templates, model functions, or sampling guidelines, from a sysadmin or a performance analyst, as model approximations to be learned or refined experimentally, and (ii) a runtime engine for iteratively collecting experimental performance samples and for validating and refining performance models. Chorus efficiently builds accurate models online, reuses and adjusts archival models over time, and combines them into an ensemble of models. We perform an experimental evaluation on a multi-tier server platform, using several industry-standard benchmarks. Our results show that Chorus is a flexible modeling framework and knowledge base for validating, extending and reusing existing models while adapting to new situations.
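As an illustration of the kind of model template and sample-fitting loop such a framework might employ (a hypothetical example, not Chorus's actual language or engine), the following sketch fits an M/M/1-style response-time curve to synthetic load/latency samples.

```python
# An illustrative sketch (not Chorus itself) of fitting a performance-model
# template, here an M/M/1-style response-time curve, to collected samples.
import numpy as np
from scipy.optimize import curve_fit

def mm1_latency(load, service_rate):
    """Model template: latency = 1 / (mu - lambda), valid for load < mu."""
    return 1.0 / (service_rate - load)

# Synthetic experimental samples standing in for runtime measurements.
rng = np.random.default_rng(4)
load = np.linspace(1.0, 8.0, 15)                  # requests/sec
latency = 1.0 / (10.0 - load) + rng.normal(scale=0.01, size=load.size)

(mu_hat,), _ = curve_fit(mm1_latency, load, latency, p0=[12.0])
print(f"estimated service rate: {mu_hat:.2f} req/s")
```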
318.
Genetic algorithm based resource allocation for business processes. Chan, Veng Ian. January 2011.
University of Macau / Faculty of Science and Technology / Department of Computer and Information Science