491 |
The Effects of Alternative Income Distribution on Resource Allocation in India. Guha, Arghya, 07 1900 (has links)
In this thesis, we examine the effects of alternative income redistribution schemes on the optimal pattern of resource allocation. We also identify the sectors of the economy that come under strain when these redistribution schemes are implemented and the years in which the strains are felt most. We find that redistribution of income between the lower and middle income groups in the rural sector leads to the maximum value of the objective function, a discounted sum of gross outputs. By contrast, redistribution of income between the upper and middle income groups in the urban sector consistently leads to low values of the objective function.

We also conduct tests to determine how sensitive these results are to changes in the assumed parameter values. The results regarding the relative desirability of the various redistribution schemes are found to be rather insensitive to changes in the social discount rate and the savings rate. A higher availability of foreign aid increases the desirability of urban redistribution schemes. Modest post-terminal growth requirements lead to infeasibilities for most redistribution schemes, as well as for the reference solution, which assumes the status quo distribution of income. The only feasible redistribution schemes are those which redistribute incomes between the upper and middle classes, and between the middle and lower classes, in the rural sector. This leads us to recommend rural redistribution not only as a desirable policy, but also as a necessary prerequisite for obtaining modest growth rates in the post-plan period. / Doctor of Philosophy (PhD)
|
492 |
Configuration Modeling and Diagnosis in Data Centers. Sondur, Sanjeev, 0000-0002-6013-6888 January 2020 (has links)
The behavior of all cyber-systems in a data center or an enterprise system largely depends on their configuration, which describes the resource allocations needed to achieve the desired goal under certain constraints. Poorly configured systems become a bottleneck for satisfying the desired goal and add unnecessary overheads such as under-utilization, loss of functionality, poor performance, economic burden, and energy consumption. The ill effects of system misconfiguration are well documented, with quantifiable metrics showing their impact on the economy, security incidents, service recovery time, loss of confidence, social impact, and more. However, configuration modeling and diagnosis of data center systems is challenging because of the complexities of subsystem interactions and the many (known and unknown) parameters that influence the behavior of the system. Further, a configuration is not a static object but a dynamically evolving entity that requires changes (either automatic or manual) to address the evolving state of the system. We believe that a well-defined approach to configuration modeling is important, as it paves a path to keeping systems functioning properly in spite of dynamic changes to their configurations.
Proper configuration of large systems is difficult because interdependencies between various configuration parameters and their impact on performance or other attributes of the system are generally poorly understood. Consequently, properly configuring a system or a subsystem/device within it is largely dependent on expert knowledge developed over time. In this work, we attempt to formalize some approaches to configuration management, particularly in the area of network devices and Cloud/Edge storage solutions. In particular, we address the following aspects in this study: (i) impact of resource allocation on the energy-performance trade-off, with a network topology as an example, (ii) prediction of performance of a complex IT system (such as Cloud Storage Gateway or an Edge Storage Infrastructure) under given conditions, (iii) development of a data-driven method to efficiently configure (allocate resources) to satisfy required QoS levels under constrained conditions, and (iv) a model to express configuration health as a quantifiable metric.
With increasing stress on data center networks and correspondingly increasing energy consumption, we propose a method to simultaneously configure routing and energy management related parameters to ensure that the network can both avoid congestion and maximize opportunities for putting network ports in lower power mode.
We also study the problem of choosing hardware and resource settings to minimize cost and achieve a given level of performance. Because of the complexity of the problem, we explored machine learning (ML) based techniques. For concreteness, we studied the problem in the context of configuring a cloud storage gateway (CSG), which involves parameters such as the speed and number of CPU cores, memory size and bandwidth, IO size and bandwidth, and data and metadata cache sizes. It turns out that it is very difficult to obtain a reliable ML model for this problem directly; instead, our approach is to use a model for the opposite problem (predicting optimal cost or performance for a given configuration) along with a meta-heuristic such as a genetic algorithm or simulated annealing. We show that an intelligent grouping of configuration parameters, based on expected relationships between parameters and the relative importance of the groups, substantially outperforms standard meta-heuristic exploration of the state space.
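As a rough illustration of the meta-heuristic search described above, the sketch below pairs a stand-in performance predictor with simulated annealing over grouped configuration parameters. The parameter names, groups, predictor, and cost function are illustrative assumptions, not the dissertation's actual model or data.

```python
# Hypothetical sketch: simulated annealing over grouped CSG configuration
# parameters, guided by a learned performance predictor. All names and numbers
# below are assumptions for illustration only.
import math
import random

# Candidate values per parameter, organized into groups ordered by the
# (assumed) relative importance of each group.
GROUPS = [
    {"cpu_cores": [2, 4, 8, 16], "cpu_speed_ghz": [2.0, 2.5, 3.0]},     # compute
    {"mem_gb": [8, 16, 32, 64], "data_cache_gb": [4, 8, 16]},           # memory/cache
    {"io_size_kb": [64, 256, 1024], "net_bw_mbps": [100, 500, 1000]},   # I/O
]

def predict_throughput(cfg):
    """Stand-in for a trained ML model mapping a configuration to performance."""
    return (cfg["cpu_cores"] * cfg["cpu_speed_ghz"]
            + 0.1 * cfg["mem_gb"] + 0.05 * cfg["net_bw_mbps"])

def neighbor(cfg, group):
    """Perturb one parameter drawn from the currently active group only."""
    new = dict(cfg)
    param = random.choice(list(group))
    new[param] = random.choice(group[param])
    return new

def anneal(target_throughput, steps=2000, t0=1.0):
    cfg = {p: random.choice(v) for g in GROUPS for p, v in g.items()}
    for i in range(steps):
        # Explore the most important parameter group first, then move on.
        group = GROUPS[min(i * len(GROUPS) // steps, len(GROUPS) - 1)]
        cand = neighbor(cfg, group)
        def cost(c):
            # Resource "cost" proxy plus a penalty for missing the target.
            return sum(c[p] for g in GROUPS for p in g) \
                   + 1e3 * max(0.0, target_throughput - predict_throughput(c))
        delta = cost(cand) - cost(cfg)
        t = t0 * (1 - i / steps)
        if delta < 0 or random.random() < math.exp(-delta / max(t, 1e-6)):
            cfg = cand
    return cfg

print(anneal(target_throughput=40.0))
```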
Our work in the configuration space revealed a dominant void: the absence of a common vocabulary or quantifiable metric to clearly and unambiguously express the quality of a configuration. In our diagnosis work, we designed a model that defines a simple, reproducible, and verifiable metric allowing users to express the quality of a device configuration as a health score. Our configuration diagnosis model expresses the strength (or weakness) of a configuration as a ‘Health Index’, a vector over dimensions such as performance, availability, and security. This health index helps users and administrators identify weak configuration objects and take remedial action to rectify the configurations.
Our work on Configuration Modeling and Diagnosis addresses an important topic in this vast, chaotic space. Using industry-driven problems and empirical data, we bring some order to this complex problem. Though our research and experiments involved specific devices (network topologies, cloud gateways, edge storage, network routers, etc.), we show that the proposed solution is generic and can be applied to other domains. We hope that this work will encourage other communities to explore new 'configuration' challenges in a rapidly changing IT landscape. / Computer and Information Science
|
493 |
Network Update and Service Chain Management in Software Defined Networks. Chen, Yang, 0000-0003-0578-2016 January 2020 (has links)
Software Defined Networking (SDN) emerged in recent years to fundamentally change how we design, build, and manage networks. To maximize network utilization, its control plane needs to frequently update the data plane via flow migration as network conditions change dynamically; this is known as network update. Network Function Virtualization (NFV) addresses the problems of traditional, expensive hardware appliances by leveraging virtualization technology to implement network functions in software modules (middleboxes). These software modules, also called Virtual Network Functions (VNFs), are now commonly provisioned in modern networks, demonstrating their increasing importance. The combination of SDN and NFV enables network service providers to pick service locations from multiple available servers and steer traffic through the appropriate VNFs; this is known as VNF deployment. A service chain consists of multiple VNFs chained in some order. VNFs are executed on virtualization platforms, which makes them more prone to error than dedicated hardware. As a result, one important issue for service chains is reliability, meaning that each type of VNF in a service chain performs its function properly; this is known as service chain resilience.
This dissertation presents our research on the three topics mentioned above, with the goal of improving network performance. Details are as follows:
1. Network Update: SDNs frequently need to migrate flows to update the network configuration for better system performance. However, the existing literature does not take flow path overlapping information into consideration when flows’ routes are re-allocated. Consequently, congestion can occur, resulting in deadlocks among flows and link resources that block the update process and cause severe packet loss. We propose multiple solutions that exploit various kinds of spare resources in the network.
2. VNF Deployment: We focus on the VNF deployment problem under different settings and constraints, including: (1) network topology; (2) vertex capacity constraints; (3) traffic-changing effects; (4) heterogeneous or homogeneous models for one VNF kind; (5) dependency relations between VNFs. We efficiently deploy VNF instances while making sure that the processing requirements of all flows are satisfied.
3. Resilient Service Chain Management: One effective way of ensuring VNF robustness is to provision redundancy by deploying backup instances alongside active ones. In order to guarantee service chain reliability, we consider both server resource allocation and VNF backup assignment. We aim to minimize the total cost in terms of transmission delay and rule changes. / Computer and Information Science
|
494 |
Efficient and robust resource allocation for network function virtualization. Sallam, Gamal January 2020 (has links)
With the advent of Network Function Virtualization (NFV), network services that traditionally run on proprietary dedicated hardware can now be realized using Virtual Network Functions (VNFs) hosted on general-purpose commodity hardware. This new network paradigm offers great flexibility to Internet service providers (ISPs) for efficiently operating their networks (collecting network statistics, enforcing management policies, etc.). However, introducing NFV requires an investment to deploy VNFs at certain network nodes (called VNF-nodes), which has to account for practical constraints such as the deployment budget and the VNF-nodes' limited resources. While gradually transitioning to NFV, ISPs face the problem of where to introduce NFV most efficiently; here, we measure efficiency by the amount of traffic that can be served in an NFV-enabled network. This problem is non-trivial, as it is composed of two challenging subproblems: 1) placement of VNF-nodes; 2) allocation of the VNF-nodes' resources to network flows. These two subproblems must be considered jointly to satisfy the objective of serving the maximum amount of traffic.
We first consider this problem in the one-dimensional setting, where all network flows require one network function, which requires a unit of resource to process a unit of flow. In contrast to most prior work, which often neglects either the budget constraint or the resource allocation constraint, we explicitly consider both of them and prove that accounting for them introduces several new challenges. Specifically, we prove that the studied problem is not only NP-hard but also non-submodular. To address these challenges, we introduce a novel relaxation method such that the objective function of the relaxed placement subproblem becomes submodular. Leveraging this useful submodular property, we propose two algorithms that achieve approximation ratios of $\frac{1}{2}(1-1/e)$ and $\frac{1}{3}(1-1/e)$, respectively, for the original non-relaxed problem.
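For readers unfamiliar with the submodular machinery behind these guarantees, the sketch below shows the standard greedy rule for monotone submodular maximization under a cardinality budget, applied to a toy coverage-style placement instance. The objective and candidate VNF-nodes are hypothetical and do not reproduce the dissertation's relaxed formulation.

```python
# Illustrative greedy for monotone submodular maximization under a cardinality
# budget, the building block behind (1 - 1/e)-style guarantees. The
# coverage-style objective and candidate VNF-nodes are hypothetical.

def greedy_placement(candidates, budget, coverage):
    """candidates: iterable of node ids; coverage: node -> set of flows served.
    Greedily pick the node with the largest marginal gain in covered flows."""
    chosen, covered = [], set()
    for _ in range(budget):
        best, best_gain = None, 0
        for v in candidates:
            if v in chosen:
                continue
            gain = len(coverage[v] - covered)   # marginal value of adding v
            if gain > best_gain:
                best, best_gain = v, gain
        if best is None:                        # no remaining node adds value
            break
        chosen.append(best)
        covered |= coverage[best]
    return chosen, covered

# Toy instance: which flows each candidate VNF-node could serve.
coverage = {"a": {1, 2, 3}, "b": {3, 4}, "c": {5}, "d": {1, 5, 6}}
print(greedy_placement(["a", "b", "c", "d"], budget=2, coverage=coverage))
```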
Next, we consider the multi-dimensional setting, where flows can require multiple network functions, which can also require a different amount of each resource to process a unit of flow. To address the new challenges arising from the multi-dimensional setting, we propose a novel two-level relaxation method that allows us to draw a connection to the sequence submodular theory and utilize the property of sequence submodularity along with the primal-dual technique to design two approximation algorithms. Finally, we perform extensive trace-driven simulations to show the effectiveness of the proposed algorithms.
While the NFV paradigm offers great flexibility to network operators for efficient management of their networks, VNF instances are typically more prone to error and more vulnerable to security threats than dedicated hardware devices. Therefore, the NFV paradigm also poses new challenges concerning failure resilience. This has motivated us to consider robustness for the class of sequence submodular function maximization problems, which has a wide range of applications, including those in the NFV domain. Submodularity is an important property of set functions and has been extensively studied in the literature. It models set functions that exhibit a diminishing returns property, where the marginal value of adding an element to a set decreases as the set expands. This notion has been generalized to sequence functions, where the order in which elements are added plays a crucial role and determines the function value; the generalized notion is called sequence (or string) submodularity. In this part of the dissertation, we study a new problem of robust sequence submodular maximization with cardinality constraints. The robustness is against the removal of a subset of elements in the selected sequence (e.g., due to malfunctions or adversarial attacks). Compared to robust submodular maximization for set functions, new challenges arise when sequence functions are concerned. Specifically, there are multiple definitions of submodularity for sequence functions, which exhibit subtle yet critical differences. Another challenge comes from two directions of monotonicity: forward monotonicity and backward monotonicity, both of which are important for proving performance guarantees. To address these unique challenges, we design two robust greedy algorithms: one achieves a constant approximation ratio but is robust only against the removal of a subset of contiguous elements, while the other is robust against the removal of an arbitrary subset of the selected elements but requires a stronger assumption and achieves an approximation ratio that depends on the number of removed elements.
Finally, we consider important problems that arise in production networks, where packets need to pass through an ordered set of network functions, called a Service Function Chain (SFC), before reaching the destination. We study the following problems: (1) How to find an SFC-constrained shortest path between any pair of nodes? (2) What is the achievable SFC-constrained maximum flow? We propose a transformation of the network graph that minimizes the computational complexity of subsequent applications of any shortest path algorithm. Moreover, we formulate the SFC-constrained maximum flow problem as a fractional multicommodity flow problem and develop a combinatorial algorithm for a special case of practical interest. / Computer and Information Science
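One common way to realize an SFC-constrained shortest-path computation, shown below as a hedged sketch, is to run Dijkstra over an expanded state space (node, number of chain functions already traversed); the dissertation's exact graph transformation may differ. The topology, hosting sets, and chain are illustrative.

```python
# Layered-graph realization of SFC-constrained shortest paths (illustrative).
# The state (u, k) means "at node u having already traversed the first k
# functions of the chain"; Dijkstra runs on this expanded graph.
import heapq

def sfc_shortest_path(adj, hosts, chain, src, dst):
    """adj: node -> list of (neighbor, weight); hosts: node -> set of functions
    the node can run; chain: ordered list of required functions."""
    start, goal_k = (src, 0), len(chain)
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, (u, k) = heapq.heappop(pq)
        if d > dist.get((u, k), float("inf")):
            continue
        if u == dst and k == goal_k:
            return d
        # Zero-cost "layer transition": apply the next chain function at u.
        if k < goal_k and chain[k] in hosts.get(u, set()):
            if d < dist.get((u, k + 1), float("inf")):
                dist[(u, k + 1)] = d
                heapq.heappush(pq, (d, (u, k + 1)))
        # Ordinary routing edges stay within the current layer.
        for v, w in adj.get(u, []):
            if d + w < dist.get((v, k), float("inf")):
                dist[(v, k)] = d + w
                heapq.heappush(pq, (d + w, (v, k)))
    return None

adj = {"s": [("m", 1)], "m": [("t", 1)], "t": []}
hosts = {"m": {"fw"}}
print(sfc_shortest_path(adj, hosts, ["fw"], "s", "t"))  # 2.0 via the firewall at m
```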
|
495 |
Adaptive resource management for P2P live streaming systems. Yuan, X., Min, Geyong, Ding, Y., Liu, Q., Liu, J., Yin, H., Fang, Q. January 2013 (has links)
Peer-to-Peer (P2P) has become a popular live streaming delivery technology owing to its scalability and low cost. P2P streaming systems often employ multiple channels to deliver streams to users simultaneously, which creates the challenge of allocating server resources among these channels appropriately. Most existing P2P systems resort to over-allocating server resources to the different channels, which results in low efficiency and high cost. To allocate server resources to different channels efficiently, we propose a dynamic resource allocation algorithm based on a streaming quality model for P2P live streaming systems. This algorithm can improve the channel streaming quality of multi-channel P2P live streaming systems and also guarantees the streaming quality of the channels under extreme Internet conditions. The proposed algorithm is validated experimentally using trace data.
|
496 |
Optimal Energy Resource Allocation in Isolated Micro Grid with Limited Supply Capacity. Anuebunwa, Ugonna, Mokryani, Geev 13 October 2021 (has links)
An isolated micro-grid with limited generating capacity will most likely face operational challenges, either due to an increasing number of customers or the introduction of new loads onto the network. This reflects a scenario observed especially in developing countries, where installed PV capacity often does not receive commensurate expansion as load demand increases. To prevent network failure, each user can be allocated a certain share of the limited power supply that should not be exceeded. These allotments are dynamic and vary at regular time intervals every day, depending on each user's historic load profile data. This work is therefore based on managing power supply from a PV source operating as an isolated micro-grid with storage capabilities. A power supply scheduling mechanism is introduced which allocates a maximum power capacity to every user. Hence, communities detached from the grid can enjoy electricity despite shortfalls in power supply capacity. The results, evaluated under three scenarios, show that the energy limit allocated to each user depends on the current capacity of the battery as well as the forecast load demand. This allotment is enforced using variable circuit breakers whose cut-off point is varied based on the prevailing energy demand and supply requirements.
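A minimal sketch of such an allotment rule is given below, assuming a simple proportional cap derived from available PV plus battery energy and each user's forecast demand; the paper's actual scheduling mechanism, breaker model, and data are not reproduced here.

```python
# Illustrative allotment rule (not the paper's exact scheme): for each time
# slot, cap every user at a share of the available PV + battery energy,
# scaled by that user's forecast demand. Names and numbers are assumptions.

def allocate_limits(forecast_kwh, pv_kwh, battery_kwh, reserve_kwh=1.0):
    """forecast_kwh: user -> forecast demand this slot.
    Returns user -> maximum energy (kWh) the breaker will permit."""
    available = pv_kwh + max(0.0, battery_kwh - reserve_kwh)
    total_demand = sum(forecast_kwh.values())
    if total_demand <= available:      # no shortfall: everyone gets their forecast
        return dict(forecast_kwh)
    scale = available / total_demand   # shortfall: scale all limits down
    return {user: demand * scale for user, demand in forecast_kwh.items()}

limits = allocate_limits({"user_a": 2.0, "user_b": 3.0, "user_c": 5.0},
                         pv_kwh=4.0, battery_kwh=3.5)
print(limits)   # each breaker cut-off set below forecast when supply is short
```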
|
497 |
Resource allocation and NFV placement in resource constrained MEC-enabled 5G-Networks. Fedrizzi, Riccardo 29 June 2023 (has links)
The fifth generation (5G) of mobile communication networks is expected to support a large number of vertical industries requiring services with widely differing requirements. To accommodate this, mobile networks are undergoing a significant transformation to enable a variety of services to coexist on the same infrastructure through network slicing. Additionally, the introduction of a distributed user plane and multi-access edge computing (MEC) technology allows the deployment of virtualised applications close to the network edge. The first part of this dissertation focuses on end-to-end network slice provisioning for various vertical industries with different service requirements. Two slice provisioning strategies are explored by formulating a mixed integer linear programming (MILP) problem. Further, a genetic algorithm (GA)-based approach is proposed with the aim of improving search-space exploration. Simulation results show that the proposed approach is effective in providing near-optimal solutions while drastically reducing computational complexity. The study then focuses on building a measurement-based digital twin (DT) for the highly heterogeneous MEC ecosystem. The DT operates as an intermediate and collaborative layer, enabling the orchestration layer to better understand network behavior before making changes to the physical network. Assisted by appropriate AI/ML solutions, the DT is envisioned to play a crucial role in automated network management. The study utilizes an emulated and a physical test-bed to gather network key performance indicators (KPIs) and demonstrates the potential of graph neural networks (GNNs) in enabling closed-loop automation with the help of the DT. These findings offer a foundation for future research in the area of DT models and carbon-footprint-aware orchestration.
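As an illustration of how a GA can explore a slice-placement search space, the toy sketch below evolves assignments of slices to hosting nodes; the chromosome encoding, fitness function, capacities, and operators are assumptions for illustration only and are not taken from the thesis.

```python
# Toy genetic-algorithm sketch for slice placement (purely illustrative).
import random

NODES = ["edge1", "edge2", "core1"]          # candidate hosting nodes (assumed)
CAPACITY = {"edge1": 4, "edge2": 4, "core1": 8}
SLICES = [2, 3, 1, 2, 4]                     # resource demand of each slice (assumed)

def fitness(assign):
    """Lower is better: load imbalance plus a heavy penalty for overloads."""
    load = {n: 0 for n in NODES}
    for demand, node in zip(SLICES, assign):
        load[node] += demand
    overload = sum(max(0, load[n] - CAPACITY[n]) for n in NODES)
    return max(load.values()) - min(load.values()) + 10 * overload

def evolve(pop_size=30, generations=100, mutation=0.1):
    pop = [[random.choice(NODES) for _ in SLICES] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]                       # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(SLICES))
            child = a[:cut] + b[cut:]                        # one-point crossover
            if random.random() < mutation:                   # random reassignment
                child[random.randrange(len(SLICES))] = random.choice(NODES)
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```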
|
498 |
A Bilevel Approach to Resource Allocation for Utility-Based Request-Response Systems. Sundwall, Tanner Jack 08 May 2024 (PDF)
We present a novel bilevel programming formulation that aims to solve a resource allocation problem for request-response systems. Our formulation is motivated by potential inefficiencies in the allocation of computational resources to incoming user requests in such systems. In our experience, systems often operate with a surplus of resources, even though doing so may incur unjustifiable cost. Our work attempts to optimize the tradeoff between the financial cost of resources and the opportunity cost of unfulfilled user demand. Our bilevel formulation consists of an upper problem with a constraint value that appears in the lower problem. We derive efficient methods for finding global solutions to the upper problem in two settings: first with logarithmic utility functions, and then with a particular type of sigmoidal utility function. A solution to the model we describe (1) determines the optimal number of total resources to allocate and (2) determines the optimal distribution of those resources across the set of user requests.
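For the logarithmic-utility setting, the lower-level allocation admits a closed-form water-filling solution via the KKT conditions; the sketch below illustrates this for an assumed objective of the form sum_i w_i log(1 + x_i) subject to a capacity constraint, which may differ from the thesis's exact formulation.

```python
# Minimal water-filling sketch for a lower-level allocation with logarithmic
# utilities: maximize sum_i w_i * log(1 + x_i) subject to sum_i x_i <= C,
# x_i >= 0. The KKT conditions give x_i = max(0, w_i / nu - 1), with the
# multiplier nu chosen so the capacity constraint is tight. Weights and
# capacity are illustrative assumptions.

def log_utility_allocation(weights, capacity, tol=1e-9):
    def used(nu):
        return sum(max(0.0, w / nu - 1.0) for w in weights)
    lo, hi = 1e-12, max(weights)          # used(hi) == 0, used(lo) very large
    for _ in range(200):                  # bisection on the dual variable nu
        mid = 0.5 * (lo + hi)
        if used(mid) > capacity:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    nu = 0.5 * (lo + hi)
    return [max(0.0, w / nu - 1.0) for w in weights]

alloc = log_utility_allocation(weights=[1.0, 2.0, 4.0], capacity=5.0)
print(alloc, sum(alloc))                  # heavier-weighted requests get more
```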
|
499 |
Issues of Real Time Information Retrieval in Large, Dynamic and Heterogeneous Search Spaces. Korah, John 10 March 2010 (has links)
The increasing size and prevalence of real-time information have become important characteristics of databases found on the internet. As the information changes, the relevancy ranking of search results also changes. Current methods in information retrieval, which are based on offline indexing, are not efficient in such dynamic search spaces and cannot quickly provide the most current results. Due to the explosive growth of the internet, stove-piped approaches that deal with dynamism simply by employing large computational resources are ultimately not scalable. A new processing methodology that incorporates intelligent resource allocation strategies is required. Modeling the dynamism of the search space in real time is also essential for effective resource allocation.

In order to support multi-grained dynamic resource allocation, we propose a partial processing approach that uses anytime algorithms to process documents in multiple steps. At each successive step, a more accurate approximation of the final similarity values of the documents is produced. Resource allocation algorithms use these partial results to select documents for processing and to decide on the number of processing steps and the computation time allocated to each step. We validate the processing paradigm by demonstrating its viability with image documents. We design an anytime image algorithm that uses a combination of wavelet transforms and machine learning techniques to map low-level visual features to higher-level concepts. Experimental validation is done by implementing the image algorithm within an established multiagent information retrieval framework called I-FGM.

We also formulate a multiagent resource allocation framework for the design and performance analysis of resource allocation with partial processing. A key aspect of the framework is modeling changes in the search space as external and internal dynamism using a grid-based search space model. The search space model divides the documents, or candidates, into groups based on their partial values and the portion processed. Hence, changes in the search space can be effectively represented in the search space model as flows of agents and candidates between the grid cells. Using comparative experimental studies and detailed statistical analysis, we validate the search space model and demonstrate the effectiveness of the resource allocation framework. / Ph. D.
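A hedged sketch of the partial-processing idea follows: each candidate document's similarity estimate is refined in fixed anytime steps, and a priority queue decides which candidate receives the next slice of computation. The refinement function and documents are stand-ins, not the internals of I-FGM.

```python
# Hypothetical sketch of anytime partial processing with priority-based
# resource allocation; everything below is an illustrative assumption.
import heapq
import random

MAX_STEPS = 4

def refine(doc_id, step):
    """Stand-in for one anytime step (e.g., one more wavelet level):
    returns an increasingly accurate partial similarity in [0, 1]."""
    random.seed(hash((doc_id, step)))          # deterministic toy values
    return random.random() * (step + 1) / MAX_STEPS

def anytime_retrieval(doc_ids, budget_steps):
    # Max-heap keyed by current partial similarity (negated for heapq).
    heap = [(-0.0, doc_id, 0) for doc_id in doc_ids]
    heapq.heapify(heap)
    results = {}
    for _ in range(budget_steps):
        if not heap:
            break
        neg_sim, doc_id, step = heapq.heappop(heap)
        sim = refine(doc_id, step)             # spend one processing step
        results[doc_id] = (sim, step + 1)
        if step + 1 < MAX_STEPS:               # re-queue until fully processed
            heapq.heappush(heap, (-sim, doc_id, step + 1))
    return sorted(results.items(), key=lambda kv: -kv[1][0])

print(anytime_retrieval(["img1", "img2", "img3"], budget_steps=6))
```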
|
500 |
Resource Allocation and Adaptive Antennas in Cellular Communications. Cardieri, Paulo 25 September 2000 (has links)
The rapid growth in demand for cellular mobile communications and emerging fixed wireless access has created the need to increase system capacity through more efficient utilization of the frequency spectrum, as well as the need for a better grade of service. In cellular systems, capacity improvement can be achieved by reducing co-channel interference.
Several techniques have been proposed in the literature for mitigating co-channel interference, such as adaptive antennas and power control. Also, by allocating transmitter power and communication channels efficiently (resource allocation), overall co-channel interference can be maintained below a maximum tolerable level while maximizing the carried traffic of the system.
This dissertation presents the results of an investigation into the performance of base station adaptive antennas, power control, and channel allocation as techniques for capacity improvement. Several approaches are analyzed. First, we study the combined use of adaptive antennas and a fractional loading factor in order to estimate the potential capacity improvement achieved by adaptive antennas.
Next, an extensive simulation analysis of a cellular network is carried out aiming to investigate the complex interrelationship between power control, channel allocation and adaptive antennas. In the first part of this simulation analysis, the combined use of adaptive antennas, power control and reduced cluster size is analyzed in a cellular system using fixed channel allocation.
In the second part, we analyze the benefits of combining adaptive antennas, dynamic channel allocation and power control. Two representative channel allocation algorithms are considered and analyzed regarding how efficiently they transform reduced co-channel interference into higher carried traffic. Finally, the spatial filtering capability of adaptive antennas is used to allow several users to share the same channel within the same cell. Several allocation algorithms combined with power control are analyzed. / Ph. D.
|