91

Adaptive management for networked systems

Gonzalez Prieto, Alberto January 2006 (has links)
<p>As networked systems grow in size and dynamicity, management systems must become adaptive to changing networking conditions. The focus of the work presented in this thesis is on developing engineering principles for adaptive management systems. We investigate three problems in the context of adaptive management for networked systems.</p><p>First, we address the control of the performance of an SMS system. We present a design for policy-based performance management of such systems. The design takes as input the operator's performance goals, which are expressed as policies that can be adjusted at run-time. The system attempts to achieve the given goals by periodically solving an optimization problem that takes as input the policies and traffic statistics and computes a new configuration. We have evaluated the design through extensive simulations in various scenarios and compared it with an ideal system. A prototype has been developed on a commercial SMS platform, which proves the validity of our design.</p><p>Second, we address the problem of decentralized continuous monitoring of network state variables with configurable accuracy. Network state variables are computed from device counters using aggregation functions, such as SUM, AVERAGE and MAX. We present A-GAP, a protocol that aims at minimizing the management overhead for a configurable average error of the estimation of the global aggregate. The protocol follows the push approach to monitoring and uses the concept of incremental aggregation on a self-stabilizing spanning tree. A-GAP is decentralized and asynchronous to achieve robustness and scalability. We evaluate the protocol through simulation in several scenarios. 
The results show that we can effectively control the fundamental trade-off in monitoring between accuracy and overhead.</p><p>Third, we aim at improving the performance of the policy distribution task: the mechanism that provides the right policies at the right locations in the network when they are needed. Policy distribution is a key aspect for developing policy-based systems that scale, which is a must for dynamic scenarios. We present a scalable framework for policy distribution. The framework is based on aggregating the addresses of the policies and applying multipoint communication techniques. We show the validity of the framework in a case study.</p>
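The incremental-aggregation idea behind a protocol like A-GAP can be sketched in a few lines: each node combines its local value with the partial aggregates of its children on the spanning tree. The sketch below is an illustrative simplification, not the protocol itself (which is asynchronous, push-based, and filters updates to meet a configured accuracy target); the tree, values, and names are hypothetical.

```python
def aggregate(tree, local, node, op):
    """Aggregate local values over the subtree rooted at `node`.
    `tree` maps a node to its children; `op` combines two partial aggregates."""
    partial = local[node]
    for child in tree.get(node, []):
        partial = op(partial, aggregate(tree, local, child, op))
    return partial

# Hypothetical topology: a root with two children, one of which has a leaf child.
tree = {"root": ["a", "b"], "a": ["a1"]}
local = {"root": 5, "a": 3, "b": 7, "a1": 2}

total = aggregate(tree, local, "root", lambda x, y: x + y)  # SUM = 17
peak = aggregate(tree, local, "root", max)                  # MAX = 7
```

In the real protocol each node caches its children's partials, so a local change propagates incrementally toward the root instead of triggering a full recomputation.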
92

Distributed resource allocation in networked systems using decomposition techniques

Johansson, Björn January 2006 (has links)
<p>The Internet and power distribution grids are examples of ubiquitous systems that are composed of subsystems that cooperate using a communication network. We loosely define such systems as networked systems. These systems are usually designed using trial and error. With this thesis, we aim to fill some of the many gaps in the diverse theory of networked systems. Therefore, we cast resource allocation in networked systems as optimization problems, and we investigate a versatile class of optimization problems. We then use decomposition methods to devise decentralized algorithms that solve these optimization problems.</p><p>The thesis consists of four main contributions: First, we review decomposition methods that can be used to devise decentralized algorithms for solving the posed optimization problems. Second, we consider cross-layer optimization of communication networks. Network performance can be increased if the traditionally separated network layers are jointly optimized. We investigate the interplay between the data sending rates and the allocation of resources for the communication links. The communication networks we consider have links where the data transferring capacity can be controlled. Decomposition methods are applied to the design of fully distributed protocols for two wireless network technologies: networks with orthogonal channels and network-wide resource constraints, as well as wireless networks using spatial-reuse time division multiple access. Third, we consider the problem of designing a distributed control strategy such that a linear combination of the states of a number of vehicles coincide at a given time. The vehicles are described by linear difference equations and are subject to convex input constraints. 
It is demonstrated how primal decomposition techniques and incremental subgradient methods allow us to find a solution in which each vehicle performs individual planning of its trajectory and exchanges critical information with neighbors only. We explore various communication, computation, and control structures. Fourth, we investigate the resource allocation problem for large-scale server clusters with quality-of-service objectives, in which key functions are decentralized. Specifically, the problem of selecting which services the servers should provide is posed as a discrete utility maximization problem. We develop an efficient centralized algorithm that solves this problem, and we propose three suboptimal schemes that operate with local information.</p>
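The flavor of decomposition the thesis builds on can be illustrated with a toy dual decomposition: allocate capacity C among users maximizing a sum of weighted log-utilities, where each user solves its own subproblem given a price and a master updates the price by a subgradient step. This is a textbook sketch under made-up numbers, not any specific algorithm from the thesis.

```python
def dual_decomposition(w, C, steps=2000, alpha=0.01):
    """Maximize sum_i w[i]*log(x[i]) s.t. sum_i x[i] <= C via dual subgradient."""
    lam = 0.5  # initial link price (dual variable)
    for _ in range(steps):
        # Each user solves argmax w[i]*log(x) - lam*x independently: x = w[i]/lam.
        x = [wi / lam for wi in w]
        # Master raises the price when demand exceeds capacity, lowers it otherwise.
        lam = max(lam + alpha * (sum(x) - C), 1e-6)
    return x

x = dual_decomposition([1.0, 2.0, 1.0], C=4.0)
# The allocation converges to shares proportional to the weights: [1, 2, 1].
```

The appeal for networked systems is that the per-user subproblems need only the current price, so the computation decentralizes naturally.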
93

Deployment cost efficiency in broadband delivery with fixed wireless relays

Timus, Bogdan January 2006 (has links)
<p>Although radio repeaters and wireless routers are commonly used, relaying techniques have received a lot of attention in academic publications over the last decade. Most of the techniques proposed in the literature are based on relaying terminals. For instance, groups of mobile terminals cooperate so as to jointly communicate with an access point, or with another group of mobiles in an (infrastructure-less) mobile ad-hoc network (MANET). However, it has also been suggested that these techniques can be applied to hybrid cellular-relaying architectures with fixed relays and that this would reduce infrastructure costs.</p><p>The literature shows that the coverage or capacity of a cellular network is enhanced when using relays. A common assumption in these studies is that relays are very low cost, but little attention has been given to <i>how</i> <i>cheap</i> these relays need to be in order for the technical enhancements to translate into an economic gain. It is not obvious that the techniques proposed for mobile relaying are economically feasible when applied to fixed relays.</p><p>This thesis examines the conditions under which large-scale usage of fixed relays leads to lower infrastructure cost than a purely cellular architecture, how large the benefits of these new techniques are compared with existing repeater/router techniques, and how sensitive the results are to traditional network design parameters.</p><p>The analysis is done by means of several case studies in which coverage is to be provided for broadband services by building a network from scratch. The results are expressed in terms of how cheap a relay must be with respect to a base station's cost for the hybrid infrastructure to provide the desired service at a lower cost. If in practice this relative relay cost is much lower, then high economic gains are expected.</p><p>None of the case studies considered yields substantial cost savings when using fixed relays on a large scale. 
When access points are placed as high as in a cellular network, the hybrid system is feasible only if the total relay cost is 3-20% of the total base station cost. When unplanned relay deployment is used, the impact of the antenna height and/or gain on the results is much greater than that of the particular type of amplify-and-forward relaying scheme. Planned deployment of a few relays should be used unless the cost of planning is 1-2 times larger than all the other relay costs. A proper trade-off between route length and how tightly the radio channel can be reused is essential for the feasibility of the hybrid system. The results confirm that the planned usage of a few relays together with macro-like base stations is an efficient way of providing coverage. Analysis of other scenarios, such as the use of pico base stations for coverage, is left for further studies.</p>
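The break-even logic of the cost comparison can be made concrete with a back-of-the-envelope calculation: the hybrid deployment is cheaper only while the relays' total cost stays below the cost of the base stations they replace. The deployment numbers below are made up for illustration, not taken from the thesis.

```python
def max_relative_relay_cost(bs_pure, bs_hybrid, relays_hybrid):
    """Largest relay cost, as a fraction of a base station's cost, for which
    the hybrid deployment is no more expensive than the pure-cellular one."""
    saved_bs = bs_pure - bs_hybrid
    return saved_bs / relays_hybrid

# Hypothetical case: 100 base stations pure-cellular vs 80 BS + 100 relays hybrid.
threshold = max_relative_relay_cost(100, 80, 100)
# threshold = 0.2: each relay must cost under 20% of a base station to break even.
```

The thesis' 3-20% figures are thresholds of exactly this kind, computed for realistic coverage scenarios rather than these toy counts.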
94

Design and evaluation of network processor systems and forwarding applications

Fu, Jing January 2006 (has links)
<p>During recent years, both Internet traffic and packet transmission rates have been growing rapidly, and new Internet services such as VPNs, QoS and IPTV have emerged. To meet increasing line speed requirements and to support current and future Internet services, improvements and changes are needed in current routers, both with respect to hardware architectures and forwarding applications. High-speed routers are nowadays mainly based on application-specific integrated circuits (ASICs), which are custom made and not flexible enough to support diverse services. General-purpose processors offer flexibility, but have difficulty handling high data rates. A number of software IP-address lookup algorithms have therefore been developed to enable fast packet processing in general-purpose processors. Network processors have recently emerged to provide the performance of ASICs combined with the programmability of general-purpose processors.</p><p>This thesis provides an evaluation of router design including both hardware architectures and software applications. The first part of the thesis contains an evaluation of various network processor system designs. We introduce a model for network processor systems which is used as a basis for a simulation tool. Thereafter, we study two ways to organize processing elements (PEs) inside a network processor to achieve parallelism: a pipelined and a pooled organization. The impact of using multiple threads inside a single PE is also studied. In addition, we study the queueing behavior and packet delays in such systems. The results show that parallelism is crucial to achieving high performance, but both the pipelined and the pooled processing-element topologies achieve comparable performance. 
The detailed queueing behavior and packet delay results have been used to dimension queues, and can serve as guidelines for designing memory subsystems and queueing disciplines.</p><p>The second part of the thesis contains a performance evaluation of an IP-address lookup algorithm, the LC-trie. The study considers trie search depth, prefix vector access behavior, cache behavior, and packet lookup service time. For the packet lookup service time, the evaluation contains both experimental results and results obtained from a model. The results show that the LC-trie is an efficient route lookup algorithm for general-purpose processors, capable of performing 20 million packet lookups per second on a Pentium 4, 2.8 GHz computer, which corresponds to a 40 Gb/s link for average-sized packets. Furthermore, the results show the importance of the choice of packet traces when evaluating IP-address lookup algorithms: real-world and synthetically generated traces may have very different behaviors.</p><p>The results presented in the thesis are obtained through studies of both hardware architectures and software applications. They could be used to guide the design of next-generation routers.</p>
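The lookup problem the LC-trie solves, longest-prefix match, can be illustrated with a plain binary trie. A real LC-trie additionally applies path and level compression to cut the search depth; the sketch below omits both, and the prefixes and next-hops are hypothetical.

```python
def insert(trie, prefix, nexthop):
    """Store `nexthop` under the bit-string `prefix` (e.g. "010")."""
    node = trie
    for bit in prefix:
        node = node.setdefault(bit, {})
    node["nh"] = nexthop

def lookup(trie, addr_bits):
    """Return the next-hop of the longest prefix matching `addr_bits`."""
    node, best = trie, None
    for bit in addr_bits:
        if "nh" in node:
            best = node["nh"]          # remember the longest match so far
        if bit not in node:
            break
        node = node[bit]
    else:
        if "nh" in node:
            best = node["nh"]
    return best

trie = {}
insert(trie, "0", "A")       # prefix 0/1 -> next-hop A
insert(trie, "010", "B")     # prefix 010/3 -> next-hop B
best = lookup(trie, "0101")  # both prefixes match; the longer one wins -> "B"
```

Level compression replaces dense subtrees of such a trie with multi-bit branch nodes, which is what brings the search depth down to the few memory accesses per lookup reported in the thesis.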
95

Access selection in multi-system architectures : cooperative and competitive contexts

Hultell, Johan January 2007 (has links)
<p>Future wireless networks will be composed of multiple radio access technologies (RATs). To benefit from these, users must utilize the appropriate RAT and access points (APs). In this thesis we evaluate the efficiency of selection criteria that, in addition to path loss and system bandwidth, also consider load. The problem is studied for <i>closed</i> as well as <i>open</i> systems. In the former, both terminals and infrastructure are controlled by a single actor (e.g., a mobile operator), while the latter refers to situations where terminals selfishly decide which AP they want to use (as in a common marketplace). We divide the overall problem into the prioritization between available RATs and, within a RAT, between the APs. The results from our studies suggest that data users, in general, should be served by the RAT offering the highest peak data rate.</p><p>As this can be estimated by terminals, the benefits from centralized RAT selection are limited. Within a subsystem, however, load-sensitive AP selection criteria can increase data rates. The highest gains are obtained when the subsystem is noise-limited, deployment is unplanned, and the relative difference in the number of users per AP is significant. Under these circumstances the maximum supported load can be increased by an order of magnitude. However, decentralized AP selection, where greedy autonomous terminal-based agents are in charge of the selection, was also shown to give these gains as long as the agents accounted for load. We also developed a <i>game-theoretic framework</i>, where users competed for wireless resources by bidding in a proportionally fair divisible auction. The framework was applied to a scenario where revenue-seeking APs competed for traffic by selecting an appropriate price. Compared to when APs cooperated, modelled by the Nash bargaining solution, our results suggest that a competitive access market, where infrastructure is shared implicitly, generally offers users better service at a lower cost. 
Although AP revenues are reduced, the reduction is relatively small and was shown to decrease with the concavity of demand. Lastly, we studied whether data services could be offered in a discontinuous high-capacity network by letting a terminal-based agent pre-fetch information that its user may request at some future time instant. This decouples the period during which the information is transferred from the time instant when it is consumed. Our results show that above some critical AP density, considerably lower than that required for continuous coverage, services start to perform well.</p>
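The proportionally fair divisible auction mentioned above has a simple allocation rule: each bidder receives a share of the resource proportional to its bid. The sketch below shows just that rule, with invented users and bids; the strategic pricing analysis in the thesis is built on top of it.

```python
def proportional_allocation(bids, capacity):
    """Split `capacity` among bidders in proportion to their bids."""
    total = sum(bids.values())
    return {user: capacity * b / total for user, b in bids.items()}

# Hypothetical bids for 8 units of wireless capacity.
shares = proportional_allocation({"u1": 2.0, "u2": 1.0, "u3": 1.0}, capacity=8.0)
# u1 bids half of the total and receives half of the capacity: 4.0 units.
```

A useful property of this rule is that a user's share depends only on the ratio of its bid to the total, which is what makes the induced game between bidders (and between revenue-seeking APs setting prices) tractable to analyze.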
96

Automatic control in TCP over wireless

Möller, Niels January 2005 (has links)
<p>Over the last decade, both the Internet and mobile telephony have become part of daily life, changing the ways we communicate and search for information. These two distinct tools are now slowly merging. The topic of this thesis is TCP over wireless, and the automatic control used within the system, from link-layer power control to end-to-end congestion control. It consists of three main contributions.</p><p>The first contribution is a proposed split-connection scheme for downloads to a mobile terminal. A wireless mobile terminal requests a file or a web page from a proxy, which in turn requests the data from a server on the Internet. During the file transfer, the radio network controller (RNC) sends radio network feedback (RNF) messages to the proxy. These messages include information about bandwidth changes over the radio channel, and the current RNC queue length. A novel control mechanism in the proxy uses this information to adjust the sending rate. The stability and convergence speed of the proxy controller are analyzed theoretically. The performance of the proposed controller is compared to end-to-end TCP Reno, using ns-2 simulations of realistic use cases. It is shown that the proxy control is able to reduce the response time experienced by users, and to increase the utilization of the radio channel. The changes are localized to the RNC and the proxy; no changes are required to the TCP implementation in the terminal or the server.</p><p>The second contribution is the analysis of an uplink channel using power control and link-layer retransmissions. To be able to design the link-layer mechanisms in a systematic way, good models for the link-layer processes, and their interaction with TCP, are essential. The use of link-layer retransmissions transforms a link with constant delay and random losses into a link with random delay and almost no losses. 
As seen from the TCP end points, the difference between such a link and a wired one is no longer the loss rate, but the packet delay distribution. Models for the power control and link-layer retransmissions on the link are used to derive the packet delay distribution, and its impact on TCP performance is investigated.</p><p>The final contribution considers ways to optimize the link-layer processes. The main result is that TCP performance, over a wireless link with random retransmission delays, can be improved by adding carefully chosen artificial delays to certain packets. The artificial delays are optimized off-line and applied on-line. The additional delay that is applied to a packet depends only on the retransmission delay experienced by that same packet, and this information is available locally at the link.</p>
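The idea of the proxy controller, steering the sending rate with radio-network feedback so that the RNC queue tracks a reference level, can be illustrated with a toy discrete-time simulation. The gain, reference level, and bandwidth profile below are hypothetical choices for illustration, not the controller analyzed in the thesis.

```python
def simulate(bandwidth, q_ref=10.0, k=0.5, steps=200):
    """Proxy sends at the current radio bandwidth plus a correction that
    drives the RNC queue toward the reference level q_ref."""
    q = 0.0  # RNC queue length
    for t in range(steps):
        rate = max(bandwidth(t) + k * (q_ref - q), 0.0)  # proxy sending rate
        q = max(q + rate - bandwidth(t), 0.0)            # queue evolution
    return q

# Radio bandwidth drops from 5 to 2 halfway through the run.
final_q = simulate(lambda t: 5.0 if t < 100 else 2.0)
# The queue settles back at the reference level despite the bandwidth change.
```

Because the bandwidth term cancels in the queue dynamics, the queue error contracts by the factor (1 - k) per step regardless of the channel rate, which is the kind of stability property the thesis establishes for its (more realistic, delayed-feedback) controller.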
97

Cross-layer optimization of wireless multi-hop networks

Soldati, Pablo January 2007 (has links)
<p>The interest in wireless communications has grown constantly for the past decades, leading to an enormous number of applications and services embraced by billions of users. In order to meet the increasing demand for mobile Internet access, several high data-rate radio networking technologies have been proposed to offer wide-area high-speed wireless communications, eventually replacing fixed (wired) networks for many applications.</p><p>This thesis considers cross-layer optimization of multi-hop radio networks, where the system performance can be improved if the traditionally separated network layers are jointly optimized. The networks we consider have links with variable transmission rates, influenced by the allocation of transmission opportunities and channels, modulation and coding schemes, and transmit powers. First, we formulate the optimal network operation as the solution to a network utility maximization problem and review decomposition methods from mathematical programming that allow translating a centralized network optimization problem into distributed mechanisms and protocols. Second, particular focus is given to networks employing spatial-reuse TDMA, where we develop detailed distributed solutions for joint end-to-end communication rate selection, multiple time-slot transmission scheduling and power allocation which achieve the optimal network utility. In the process, we introduce a novel decomposition method for convex optimization, establish its convergence and demonstrate how it suggests a distributed solution based on flow control optimization and incremental updates of the transmission schedule. We develop a two-step procedure for distributed computation of the schedule updates (maximizing congestion-weighted throughput) and suggest two schemes for distributed channel reservation and power control under realistic interference models. Third, we investigate the advantages of employing multi-user detectors within a CDMA/TDMA framework. 
We demonstrate how column generation techniques can be combined with resource allocation schemes for the multi-access channel into a very efficient computational method. Fourth, we investigate the benefits and challenges of using the emerging OFDMA modulation scheme within our framework. Specifically, we consider the problem of assigning sub-carriers to wireless links in multi-hop mesh networks. Since the underlying mathematical programming problem is computationally hard, we develop a specialized algorithm that computes near-optimal solutions in reasonable time, and suggest a heuristic for reducing the computational effort at the price of relatively modest performance losses.</p>
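To give a feel for the sub-carrier assignment problem, here is a greedy heuristic in the general spirit of such schemes: each sub-carrier goes to the feasible link with the best gain on it, subject to a per-link cap. This is a toy stand-in, not the specialized algorithm from the thesis, and the gain matrix is made up.

```python
def greedy_assign(gains, per_link_cap):
    """gains[s][l]: channel gain of sub-carrier s on link l.
    Returns a map sub-carrier -> link, at most per_link_cap carriers per link."""
    counts = {l: 0 for l in range(len(gains[0]))}
    assignment = {}
    for s, row in enumerate(gains):
        # Pick the link with the highest gain among those still below the cap.
        best = max((l for l in counts if counts[l] < per_link_cap),
                   key=lambda l: row[l])
        assignment[s] = best
        counts[best] += 1
    return assignment

gains = [[0.9, 0.2],   # sub-carrier 0 strongly favors link 0
         [0.8, 0.7],   # sub-carrier 1 slightly favors link 0
         [0.1, 0.6]]   # sub-carrier 2 favors link 1
assignment = greedy_assign(gains, per_link_cap=2)
# Sub-carriers 0 and 1 go to link 0, sub-carrier 2 to link 1.
```

Greedy assignments like this are fast but can be arbitrarily far from optimal, which is why the thesis invests in a specialized near-optimal algorithm for the hard underlying problem.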
98

Self-organization, cooperation and control distribution in wide and local area networks

Lungaro, Pietro January 2007 (has links)
<p>To support the future requirements on wireless systems in an affordable manner, it is commonly believed that multiple radio access technologies have to be combined. These technologies can be deployed by a single operator or even be managed by different competing operators. In order to cope with the increased complexity of such a multifaceted wireless environment, it has been argued that a transfer of Radio Resource Management (RRM) functionalities towards the network edges (access ports and, ultimately, user terminals) may be beneficial. In addition to detecting varying system conditions faster, this would also allow more responsive service adaptation. In this thesis we evaluate a set of self-organizing regimes, all with the purpose of supporting the distribution of control to the edge nodes.</p><p>Particular emphasis is put on the design of a mechanism for dynamically establishing cooperation between different network entities, whether these are access ports or user terminals.</p><p>Terminal cooperation by means of multihopping is considered in the context of service provision in cellular access systems. Previously, the opportunity cost associated with sharing one's own bandwidth, and the energy loss, have been seen as major obstacles to relaying other users' traffic. To mitigate the effects of this <i>selfish</i> behavior, the concept of <i>resource</i> <i>delegation</i> is introduced and evaluated in combination with a rewarding scheme designed to compensate for the energy losses induced by forwarding. The results show that our proposed schemes are capable not only of fostering significant cooperation among users, but also of creating a simultaneous improvement in user utility, data rates, and operator revenues.</p><p>Opening up networks of user-deployed Access Points (APs) for service provision is considered a means to radically lower the cost of future wireless services. 
However, since these networks are deployed in an uncoordinated manner, only discontinuous coverage will be provided. The question of how dense these networks need to be to deliver acceptable user perception is investigated in this thesis for a set of archetypical services. The results show that already at moderate AP densities the investigated services can be provided with sufficient quality. Epidemic exchange of popular content and inter-AP cooperation are also shown to further decrease the required infrastructure density and to improve the APs’ utilization, respectively.</p><p>As a last contribution, <i>“Word-of-Mouth”</i>, a distributed reputation-based scheme, is investigated in the context of access selection in multi-operator environments. By exchanging information concerning the Quality of Service (QoS) associated with the different networks, terminal agents can collectively reveal the capabilities of individual networks. For a vertical handover scenario we show that our proposed scheme can reward access providers capable of ensuring some degree of QoS. By introducing a model for collusion between low-performing APs and terminal agents, we show that our proposed scheme is also robust to the dissemination of false information.</p>
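A "Word-of-Mouth"-style exchange can be sketched as terminals blending their own QoS observations with ratings received from peers, then selecting the network with the highest merged score. The equal-weight blend and the scores below are hypothetical choices for illustration, not the thesis' exact scheme.

```python
def merge_ratings(own, peer_reports, trust=0.5):
    """Blend own per-network QoS scores with the mean of peer reports.
    `trust` weights how much peer opinion counts versus own experience."""
    merged = {}
    for net in own:
        peer_avg = sum(r[net] for r in peer_reports) / len(peer_reports)
        merged[net] = (1 - trust) * own[net] + trust * peer_avg
    return merged

own = {"netA": 0.9, "netB": 0.4}
peers = [{"netA": 0.3, "netB": 0.8},
         {"netA": 0.5, "netB": 0.6}]
scores = merge_ratings(own, peers)
chosen = max(scores, key=scores.get)  # pick the network with the best reputation
```

The collusion robustness studied in the thesis amounts to showing that, with enough honest reporters, false peer reports cannot flip this kind of collective ranking.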
99

Towards robust traffic engineering in IP networks

Gunnar, Anders January 2007 (has links)
<p>To deliver a reliable communication service, it is essential for the network operator to manage how traffic flows in the network. The paths taken by the traffic are controlled by the routing function. Traditional ways of tuning routing in IP networks are designed to be simple to manage, and are not designed to adapt to the traffic situation in the network. This can lead to congestion in parts of the network while other parts are far from fully utilized. In this thesis we explore issues related to optimization of the routing function to balance load in the network.</p><p>We investigate methods for efficient derivation of the traffic situation using link count measurements. The advantage of using link counts is that they are easily obtained and yield a very limited amount of data. We evaluate estimation based on link counts and show that it gives the operator a fast and accurate description of the traffic demands. For the evaluation we have access to a unique data set of complete traffic demands from an operational IP backbone.</p><p>Furthermore, we evaluate the performance of search heuristics that set weights in link-state routing protocols. For the evaluation we have access to complete traffic data from a Tier-1 IP network. Our findings confirm previous studies that use partial or synthetic traffic data. We find that using estimated rather than exact traffic demands in the optimization has little impact on the performance of the load balancing.</p><p>Finally, we devise an algorithm that finds a routing setting that is robust to shifts in traffic patterns due to changes in the interdomain routing. A set of worst-case scenarios caused by the interdomain routing changes is identified and used to solve a robust routing problem. 
The evaluation indicates that performance of the robust routing is close to optimal for a wide variety of traffic scenarios.</p><p>The main contribution of this thesis is that we demonstrate that it is possible to estimate the traffic matrix with good accuracy and to develop methods that optimize the routing settings to give strong and robust network performance. Only minor changes might be necessary in order to implement our algorithms in existing networks.</p>
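The traffic-matrix estimation problem has a compact linear-algebra core: measured link loads y relate to origin-destination demands x through a routing matrix A, i.e. y = A x. The toy below is an exactly determined 2x2 case solved by Cramer's rule; real backbones are heavily under-determined, which is why the thesis needs statistical estimation rather than direct inversion. The routing matrix and loads are invented.

```python
def solve_2x2(A, y):
    """Solve A x = y for a 2x2 routing matrix via Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    x0 = (y[0] * A[1][1] - A[0][1] * y[1]) / det
    x1 = (A[0][0] * y[1] - y[0] * A[1][0]) / det
    return [x0, x1]

# Demand 0 crosses links 0 and 1; demand 1 crosses only link 1.
A = [[1.0, 0.0],
     [1.0, 1.0]]
y = [3.0, 5.0]       # measured link loads (link counts)
x = solve_2x2(A, y)  # recovered demands: [3.0, 2.0]
```

With far more demands than links, many demand vectors explain the same link counts, and the estimation methods evaluated in the thesis pick among them using prior traffic models.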
100

Real-Time Monitoring of Global Variables in Large-Scale Dynamic Systems

Wuhib, Fetahi Zebenigus January 2007 (has links)
<p>Large-scale dynamic systems, such as the Internet, as well as emerging peer-to-peer networks and computational grids, require a high level of awareness of the system state in real-time for proper and reliable operation. A key challenge is to develop monitoring functions that are efficient, scalable, robust and controllable. The thesis addresses this challenge by focusing on engineering protocols for distributed monitoring of global state variables. The global variables are network-wide aggregates, computed from local device variables using aggregation functions such as SUM, MAX, AVERAGE, etc. Furthermore, it addresses the problem of detecting threshold crossings of such aggregates. The design goals for the protocols are efficiency, quality, scalability, robustness and controllability. The work presented in this thesis has resulted in two novel protocols: a gossip-based protocol for continuous monitoring of aggregates called G-GAP, and a tree-based protocol for detecting threshold crossings of aggregates called TCA-GAP. The protocols have been evaluated against the design goals through three complementary evaluation methods: theoretical analysis, simulation study and testbed implementation. </p>
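The gossip style of aggregation that G-GAP builds on can be sketched with the textbook push-sum protocol for AVERAGE: each node keeps a running sum and weight, repeatedly pushing half of each to a random peer; the ratio at every node converges to the global average. G-GAP itself adds robustness to node failures and mass loss, which this sketch omits; the node values are hypothetical.

```python
import random

def push_sum_average(values, rounds=100, seed=1):
    """Synchronous-round push-sum: every node's s/w ratio converges to the mean."""
    rng = random.Random(seed)
    n = len(values)
    s = list(values)   # running sums ("mass")
    w = [1.0] * n      # running weights
    for _ in range(rounds):
        for i in range(n):
            j = rng.randrange(n)           # pick a random peer
            half_s, half_w = s[i] / 2, w[i] / 2
            s[i], w[i] = half_s, half_w    # keep half of sum and weight ...
            s[j] += half_s                 # ... and push the other half
            w[j] += half_w
    return [s[i] / w[i] for i in range(n)]  # each node's local estimate

estimates = push_sum_average([10.0, 20.0, 30.0, 40.0])
# Every node's estimate converges to the true average, 25.0.
```

The key invariant is mass conservation: the totals of s and w never change, so the weighted estimates cannot drift; it is exactly this invariant that message loss breaks, and that G-GAP's recovery mechanism restores.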
