11

Models and optimisation methods for interference coordination in self-organising cellular networks

Lopez-Perez, David January 2011 (has links)
We are at the moment in network evolution when we have realised that our telecommunication systems should mimic features of humankind, e.g., the ability to understand the medium and take advantage of its changes. Looking towards the future, the mobile industry envisions the use of fully automated cells able to self-organise all of their parameters and procedures. A fully self-organised network is one that avoids human involvement and reacts to fluctuations in the network, traffic and channel through the automatic/autonomous nature of its operation. The mobile community is still far from such a fully self-organised network, but it is taking the first steps towards achieving this target in the near future. This thesis aims to contribute to the automation of cellular networks, providing models and tools to understand the behaviour of these networks, together with algorithms and optimisation approaches to enhance their performance. The work focuses on the next generation of cellular networks, more specifically on the DownLink (DL) of Orthogonal Frequency Division Multiple Access (OFDMA) based networks. Within this type of cellular system, attention is paid to interference mitigation in self-organising macrocell scenarios and femtocell deployments. Moreover, this thesis investigates the interference issues that arise when these two cell types are jointly deployed, complementing each other in what is currently known as a two-tier network. It also provides new practical approaches to the inter-cell interference problem in both macrocell and femtocell OFDMA systems, as well as in two-tier networks, by means of a novel design framework and the use of mathematical optimisation. Special attention is paid to the formulation of optimisation problems and the development of well-performing (accurate and fast) solution methods.
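As a rough illustration of the kind of inter-cell interference coordination problem this abstract refers to (not the formulation used in the thesis), the sketch below assigns sub-channels to a toy set of cells so as to minimise co-channel coupling between neighbours. The cell positions, the inverse-square coupling model and the exhaustive search are all assumptions made for illustration.

```python
# Hypothetical toy example: sub-channel assignment for inter-cell interference
# coordination in an OFDMA downlink. Cell positions, the coupling model and
# all numbers below are illustrative assumptions, not taken from the thesis.
import itertools
import math

cells = {"A": (0.0, 0.0), "B": (0.6, 0.0), "C": (2.0, 0.0)}   # cell sites (km), assumed
subchannels = [0, 1]                                           # available sub-channels, assumed

def coupling(c1, c2):
    """Interference coupling: stronger for closer cells (simple inverse-square)."""
    (x1, y1), (x2, y2) = cells[c1], cells[c2]
    d = math.hypot(x1 - x2, y1 - y2)
    return 1.0 / (d ** 2)

def total_interference(assignment):
    """Sum of couplings between cell pairs that share a sub-channel."""
    return sum(coupling(c1, c2)
               for c1, c2 in itertools.combinations(assignment, 2)
               if assignment[c1] == assignment[c2])

# Exhaustive search is feasible at this toy size; real deployments need the
# kind of optimisation methods the thesis develops.
best = min(itertools.product(subchannels, repeat=len(cells)),
           key=lambda combo: total_interference(dict(zip(cells, combo))))
assignment = dict(zip(cells, best))
print(assignment, total_interference(assignment))
```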
12

Increased energy efficiency in LTE networks through reduced early handover

Kanwal, Kapil January 2017 (has links)
Long Term Evolution (LTE) has been widely adopted by mobile operators as a solution to the ever-growing data requirements of Users (UEs) in cellular networks. Larger data demands occupy resource blocks over longer time intervals and thus increase the dynamic power consumption of the base station's downlink. The fulfilment of UE requests therefore comes at the cost of increased power consumption, which directly affects operators' operational expenditure. It also contributes to increased CO2 emissions and hence to global warming. According to research, global Information and Communication Technology (ICT) systems consume approximately 1200 to 1800 terawatt hours (TWh) of electricity annually. Importantly, the mobile communication industry is accountable for more than one third of this ICT power consumption, owing to increased data requirements, numbers of UEs and coverage area. In terms of global warming, telecommunication is responsible for 0.3 to 0.4 percent of worldwide CO2 emissions. Moreover, user data volume is expected to increase by a factor of 10 every five years, resulting in a 16 to 20 percent increase in associated energy consumption and further adding to global warming. This research focuses on the importance of energy saving in LTE. It first proposes a bandwidth-expansion-based energy saving scheme that combines two resource blocks into a single super resource block, thereby reducing the Physical Downlink Control Channel (PDCCH) overhead; the decreased PDCCH overhead reduces dynamic power consumption by up to 28 percent. Subsequently, a novel reduced early handover (REHO) scheme is proposed and combined with bandwidth expansion to form an enhanced energy saving scheme. System level simulations show that REHO provides around 35% improved energy saving compared with the LTE standard in a 3rd Generation Partnership Project (3GPP) based scenario. Since there is a direct relationship between energy consumption, CO2 emissions and vendors' operational expenditure (OPEX), the reduced power consumption and increased energy efficiency of REHO make it a step towards greener communication with a smaller CO2 footprint and lower operational expenditure. The main idea of REHO is that it initiates handovers earlier and turns off the freed resource blocks sooner than the LTE standard. The time difference (in Transmission Time Intervals) between a REHO early handover and a standard LTE handover is therefore the key component of the energy saving achieved, and it is estimated using axioms of Euclidean geometry. Overall system efficiency is also investigated through the analysis of numerous performance-related parameters in REHO and the LTE standard, leading to key findings that guide vendors on the trade-off between energy saving, radio link failure and other important parameters.
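A minimal sketch of the energy-saving arithmetic behind an early handover, under invented figures: if the handover is triggered a number of TTIs earlier and the freed resource blocks are switched off for that interval, the dynamic energy saved is simply the product of the interval, the number of freed blocks and the per-block power. All numbers below are assumptions, not values from the thesis.

```python
# Illustrative back-of-the-envelope calculation of the energy saved by an
# early handover: the base station switches off the freed resource blocks
# during the interval between the early and the standard handover.
# All figures are assumed for illustration only.

tti_s = 0.001                 # one Transmission Time Interval (1 ms in LTE)
early_by_ttis = 50            # assumed: handover triggered 50 TTIs earlier
power_per_rb_w = 0.8          # assumed dynamic power per active resource block (W)
freed_rbs = 25                # assumed resource blocks released by the departing UE

energy_saved_j = early_by_ttis * tti_s * freed_rbs * power_per_rb_w
print(f"Energy saved per early handover: {energy_saved_j:.3f} J")
```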
13

Reducing the Cost of Operating a Datacenter Network

Curtis, Andrew January 2012 (has links)
Datacenters are a significant capital expense for many enterprises, yet they are difficult to manage and hard to design and maintain. The initial design of a datacenter network tends to follow vendor guidelines, but subsequent upgrades and expansions to it are mostly ad hoc, with equipment upgraded piecemeal after its amortization period runs out and equipment acquisition tied to budget cycles rather than changes in workload. These networks are also brittle and inflexible: they tend to be manually managed and cannot perform dynamic traffic engineering. The high-level goal of this dissertation is to reduce the total cost of owning a datacenter by improving its network. To achieve this, we make the following contributions. First, we develop an automated, theoretically well-founded approach to planning cost-effective datacenter upgrades and expansions. Second, we propose a scalable traffic management framework for datacenter networks. Together, we show that these contributions can significantly reduce the cost of operating a datacenter network. To design cost-effective network topologies, especially as the network expands over time, updated equipment must coexist with legacy equipment, which makes the network heterogeneous. However, heterogeneous high-performance network designs are not well understood. Our first step, therefore, is to develop the theory of heterogeneous Clos topologies. Using our theory, we propose an optimization framework, called LEGUP, which designs a heterogeneous Clos network to implement in a new or legacy datacenter. Although effective, LEGUP imposes a certain amount of structure on the network. To deal with situations where this is infeasible, our second contribution is a framework, called REWIRE, which uses optimization to design unstructured DCN topologies. Our results indicate that these unstructured topologies have up to 100-500% more bisection bandwidth than a fat-tree for the same dollar cost. Our third contribution is two frameworks for datacenter network traffic engineering. Because of the multiplicity of end-to-end paths in DCN fabrics, such as Clos networks and the topologies designed by REWIRE, careful traffic engineering is needed to maximize throughput. This requires timely detection of elephant flows---flows that carry large amounts of data---and management of those flows. Previously proposed approaches incur high monitoring overheads, consume significant switch resources, or have long detection times. We make two proposals for elephant flow detection. First, in the Mahout framework, we suggest that such flows be detected by observing the end hosts' socket buffers, which provide efficient visibility of flow behavior. Second, in the DevoFlow framework, we add efficient stats-collection mechanisms to network switches. Using simulations and experiments, we show that these frameworks reduce traffic engineering overheads by at least an order of magnitude while still providing near-optimal performance.
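A minimal sketch of end-host elephant-flow detection in the spirit of Mahout, assuming a simple send-buffer-occupancy threshold: a backlogged socket buffer indicates the application has more data than the network is currently carrying. The threshold value and the flow records are invented for illustration; the real system hooks into the host network stack rather than a plain dictionary.

```python
# Hypothetical sketch: flag a flow as an elephant when its socket send-buffer
# backlog exceeds a threshold. Threshold and flow data are assumptions.

ELEPHANT_THRESHOLD_BYTES = 128 * 1024   # assumed buffer-occupancy threshold

def detect_elephants(socket_buffers):
    """socket_buffers maps a flow 5-tuple to its current send-buffer backlog (bytes)."""
    return [flow for flow, backlog in socket_buffers.items()
            if backlog >= ELEPHANT_THRESHOLD_BYTES]

buffers = {
    ("10.0.0.1", 45000, "10.0.1.2", 80, "tcp"): 512 * 1024,   # backlogged: elephant
    ("10.0.0.1", 45001, "10.0.1.3", 443, "tcp"): 4 * 1024,    # short query: mouse
}
print(detect_elephants(buffers))
```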
14

Practical design of optimal wireless metropolitan area networks : model and algorithms for OFDMA networks

Gordejuela Sánchez, Fernando January 2009 (has links)
This thesis contributes to the study of the planning and optimisation of wireless metropolitan area networks, in particular to the access network design of OFDMA-based systems, where parameters such as base station position, antenna tilt and azimuth need to be configured during the early stages of the network's life. A practical view of the solution to this problem is presented by means of a novel design framework and the use of multicriteria optimisation. Relaying and cooperative communications are also considered in the context of the design of such networks, an area that has received little research attention. With the emergence of new technologies and services, it is very important to accurately identify the factors that affect the design of the wireless access network and to define how to take them into account to achieve optimally performing and cost-efficient networks. The new features and flexibility of OFDMA networks seem particularly suited to the provision of different broadband services in metropolitan areas. Until now, however, most existing efforts have focused on basic-access-capability networks. This thesis presents a way to deal with the trade-offs that arise during OFDMA access network design, and presents a service-oriented optimisation framework that offers a new perspective for this process while taking technical and economic factors into account. The introduction of relay stations in wireless metropolitan area networks brings numerous advantages, such as coverage extension and capacity enhancement, due to the deployment of new cells and the reduction of the distance between transmitter and receiver. However, network designers also face new challenges with the use of relay stations, since they introduce a new source of interference and a more complicated air interface, and this needs to be carefully evaluated during the network design process. Unlike the well-known procedure of cellular network design over regular or hexagonal scenarios, the wireless network planning and optimisation process must deal with the non-uniform characteristics of realistic scenarios, where the existence of hotspots, different channel characteristics for the users, or different service requirements determine the final design of the wireless network. This thesis is structured in three main blocks covering important gaps in the existing literature on planning (efficient simulation) and optimisation. The formulations and ideas proposed in the former case can still be evaluated over regular scenarios, for the sake of simplicity, while the latter needs to be studied over specific scenarios that are described where appropriate. Nevertheless, comments and conclusions are extrapolated to more general cases throughout this work. After an introduction and a description of related work, the thesis first studies models and algorithms for classical point-to-multipoint networks in Chapter 3, where the optimisation framework is proposed. Based on this framework, the work:
- identifies the technology-specific physical factors that most strongly affect network system-level simulation, planning and optimisation;
- demonstrates how to simplify the problem and translate it into a formal optimisation routine that takes economic factors into account;
- provides the network provider with a detailed and clear description of the different scenarios arising during the design process, so that the most suitable solution can be found.
Existing works in this area do not provide such a comprehensive framework. In Chapter 4:
- the impact of the relay configuration on the network planning process is analysed;
- a new, simple and flexible scheme to integrate multihop communications into the Mobile WiMAX frame structure is proposed and evaluated;
- efficient capacity calculations that allow intensive system-level simulations in a multihop environment are introduced.
In Chapter 5:
- the optimisation procedure is analysed with the addition of relay stations and the resulting higher complexity of the process;
- a frequency planning procedure not found in the existing literature is proposed, combined with the frame fragmentation required by in-band relay communications and cooperative procedures;
- a novel joint two-step process for network planning and optimisation is proposed.
Finally, conclusions and open issues are presented.
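As a toy illustration of the multicriteria flavour of this design problem (not the framework developed in the thesis), the sketch below scores a few hypothetical deployment options on coverage and cost with a weighted sum. The candidate configurations, their figures and the weights are all assumptions.

```python
# Toy weighted-sum trade-off between coverage and cost for candidate
# deployments. All candidates, numbers and weights are invented.

candidates = {
    # name: (covered demand fraction, deployment cost in arbitrary units)
    "3 macro sites":           (0.82, 3.0),
    "2 macro + 4 relay sites": (0.91, 2.6),
    "5 micro sites":           (0.88, 2.2),
}

def score(coverage, cost, w_cov=0.7, w_cost=0.3, cost_max=3.0):
    """Higher is better: reward coverage, penalise (normalised) cost."""
    return w_cov * coverage - w_cost * (cost / cost_max)

ranked = sorted(candidates.items(), key=lambda kv: score(*kv[1]), reverse=True)
for name, (cov, cost) in ranked:
    print(f"{name:25s} coverage={cov:.2f} cost={cost:.1f} score={score(cov, cost):.3f}")
```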
15

Hard synchronous real-time communication with the timed-token MAC protocol

Wang, Jun January 2009 (has links)
The timely delivery of inter-task real-time messages over a communication network is key to successfully developing distributed real-time computer systems. Such systems are being rapidly developed and increasingly used in many areas, such as industrial automation. This work concentrates on the timed-token Medium Access Control (MAC) protocol, one of the most suitable candidates for supporting real-time communication due to its inherent timing property of bounded medium access time. The support of real-time communication with the timed-token MAC protocol has been studied using a rigorous mathematical analysis. Specifically, to guarantee the deadlines of synchronous messages (the real-time messages defined in the timed-token MAC protocol), a novel and practical approach is developed for allocating synchronous bandwidth to a general message set whose minimum deadline (Dmin) is larger than the Target Token Rotation Time (TTRT). Synchronous bandwidth is defined as the maximum time for which a node can transmit its synchronous messages each time it receives the token. It is a sensitive parameter in the control of synchronous message transmission and must be properly allocated to individual nodes to guarantee the deadlines of real-time messages. Other issues related to the schedulability test, including the required buffer size and the Worst Case Achievable Utilisation (WCAU) of the proposed approach, are then discussed. Simulations and numerical examples demonstrate that this novel approach performs better than any previously published local synchronous bandwidth allocation (SBA) scheme in terms of its ability to guarantee the real-time traffic. A proper selection of the TTRT, which maximises the WCAU of the proposed SBA scheme, is also addressed. The work presented in this thesis is compatible with any network standard in which the timed-token MAC protocol is employed and can therefore be applied by engineers building real-time systems using these standards.
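For context, the sketch below shows a classical proportional local SBA scheme and the protocol constraint it must respect (the sum of allocations not exceeding the TTRT minus the ring overhead). This is illustrative background, not the allocation scheme proposed in the thesis, and the message set, TTRT and overhead are assumed values.

```python
# Illustrative proportional local synchronous bandwidth allocation (SBA):
# each node's H_i is proportional to its utilisation C_i/P_i, scaled so that
# sum(H_i) <= TTRT - tau. Not the thesis's scheme; all numbers are assumed.

TTRT = 8.0     # target token rotation time (ms), assumed
tau = 1.0      # total token/ring overhead per rotation (ms), assumed

# (C_i, P_i): transmission time and period of node i's synchronous stream (ms)
streams = [(2.0, 20.0), (3.0, 30.0), (1.5, 50.0)]

total_util = sum(c / p for c, p in streams)   # aggregate synchronous utilisation
budget = TTRT - tau                           # time usable for synchronous traffic per rotation

# Proportional allocation: share the budget according to each node's utilisation.
H = [budget * (c / p) / total_util for c, p in streams]

assert sum(H) <= budget + 1e-9                # protocol constraint
for i, ((c, p), h) in enumerate(zip(streams, H)):
    print(f"node {i}: C={c} P={p}  H_i={h:.3f} ms")
```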
16

A risk assessment and optimisation model for minimising network security risk and cost

Viduto, Valentina January 2012 (has links)
Network security risk analysis has received great attention within the scientific community, due to the current proliferation of network attacks and threats. Although considerable effort has been devoted to improving security best practices, insufficient effort has been expended on understanding the relationship between risk-related variables and the objectives behind cost-effective network security decisions. This thesis seeks to improve the body of knowledge on the trade-offs between financial cost and risk while analysing the impact an identified vulnerability may have on confidentiality, integrity and availability (CIA). Both security best practices and risk assessment methodologies have been extensively investigated to give a clear picture of the main limitations in the area of risk analysis. The work begins by analysing information visualisation techniques, which are used to build attack scenarios and identify additional threats and vulnerabilities. Special attention is paid to attack graphs, which have been used as the basis for a novel visualisation technique, referred to as the Onion Skin Layered Technique (OSLT), used to improve system knowledge as well as for threat identification. By analysing a list of threats and vulnerabilities during the first risk assessment stages, the work develops a novel Risk Assessment and Optimisation Model (RAOM), which expands the knowledge of risk analysis by formulating a multi-objective optimisation problem in which objectives such as cost and risk are minimised. The optimisation routine is developed to accommodate conflicting objectives and to provide the human decision maker with an optimal solution set. The aim is to minimise the cost of security countermeasures without increasing the risk of a vulnerability being exploited by a threat and resulting in some impact on CIA. Due to the multi-objective nature of the problem, a performance comparison between multi-objective Tabu Search (MOTS), exhaustive search and a multi-objective Genetic Algorithm (MOGA) has also been carried out. Finally, extensive experimentation has been carried out with both artificial and real-world problem data (taken from the case study) to show that the method is capable of delivering solutions for real-world problem data sets.
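A minimal sketch of the two-objective (cost, residual risk) trade-off that such a model addresses: enumerate countermeasure subsets, score each on total cost and residual risk, and keep the Pareto-optimal set. The countermeasure data are invented, and exhaustive enumeration stands in for the MOGA/MOTS methods the thesis actually compares.

```python
# Hypothetical two-objective countermeasure selection: minimise cost and
# residual risk. Countermeasure data and base risk are assumptions.
from itertools import combinations

# countermeasure: (cost, risk reduction it provides), arbitrary units
countermeasures = {"patching": (2, 5), "firewall": (4, 7), "IDS": (6, 6), "training": (1, 3)}
BASE_RISK = 20

def evaluate(subset):
    cost = sum(countermeasures[c][0] for c in subset)
    risk = max(0, BASE_RISK - sum(countermeasures[c][1] for c in subset))
    return cost, risk

solutions = []
names = list(countermeasures)
for r in range(len(names) + 1):
    for subset in combinations(names, r):
        solutions.append((subset, *evaluate(subset)))

# Keep solutions not dominated by any other (lower-or-equal in both objectives,
# strictly better in at least one).
pareto = [s for s in solutions
          if not any(o[1] <= s[1] and o[2] <= s[2] and (o[1], o[2]) != (s[1], s[2])
                     for o in solutions)]
for subset, cost, risk in sorted(pareto, key=lambda s: s[1]):
    print(f"cost={cost:2d} risk={risk:2d}  {subset}")
```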
17

A novel MAC protocol for cognitive radio networks

Shah, Munam Ali January 2013 (has links)
The scarcity of bandwidth in the radio spectrum has become more acute as the demand for wireless applications has increased. Most of the spectrum bands have been allocated, although many studies have shown that these bands are significantly underutilized most of the time. The problems of unavailability of spectrum bands and inefficiency in their utilization are addressed by cognitive radio (CR) technology: an opportunistic approach in which the network senses the environment, observes network changes, and then uses knowledge gained from prior interaction with the network to make intelligent decisions by dynamically adapting its transmission characteristics. In this thesis, recent research on advances in the theory and applications of cognitive radio technology is reviewed. The thesis starts with the essential background on cognitive radio techniques and systems and discusses those characteristics of CR technology, such as standards, applications and challenges, that can help make software radio more personal. It then presents more advanced material by extensively reviewing the work done so far in the area of cognitive radio networks, and more specifically on the medium access control (MAC) protocols of CR. The list of references will be useful to both researchers and practitioners in this area, and the material can also be adopted as graduate-level teaching material for an advanced course on wireless communication networks. The development of new technologies such as Wi-Fi, cellular phones, Bluetooth, TV broadcasts and satellite has created immense demand for radio spectrum, a limited natural resource ranging from 30 kHz to 300 GHz. For every wireless application, some portion of the radio spectrum needs to be purchased, and the Federal Communications Commission (FCC) allocates the spectrum for a fee for such services. This static allocation of the radio spectrum has led to various problems, such as saturation in some bands, scarcity, and a lack of radio resources for new wireless applications. Most of the frequencies in the radio spectrum have been allocated, although many studies have shown that the allocated bands are not being used efficiently. CR technology is one of the effective solutions to the shortage of spectrum and the inefficiency of its utilization. In this thesis, a detailed investigation of issues related to protocol design for cognitive radio networks is presented, with particular emphasis on the MAC layer. A novel Dynamic, Decentralized and Hybrid MAC (DDH-MAC) protocol is proposed that lies between the CR MAC protocol families of globally available common control channel (GCCC) and local control channel (non-GCCC) protocols. First, a multi-access channel MAC protocol, which integrates the best features of both GCCC and non-GCCC, is proposed. Second, an enhancement to the protocol is proposed by enabling it to access more than one control channel at the same time: the cognitive users/secondary users (SUs) always have access to one control channel and can identify and exploit vacant channels by dynamically switching across the different control channels. Third, rapid and efficient exchange of CR control information is proposed to reduce the delays caused by the opportunistic nature of CR; the pre-transmission time for CR is calculated, and its significant effect on nodes holding delay-sensitive data is investigated. Fourth, an analytical model, including a Markov chain model, is proposed. This analytical model rigorously analyses the performance of the proposed DDH-MAC protocol in terms of aggregate throughput, access delay, and spectrum opportunities in both saturated and non-saturated networks. Fifth, a simulation model for the DDH-MAC protocol is developed using OPNET Modeler and its performance is investigated for queuing delays, bit error rates, backoff slots and throughput. Both the numerical and simulation results show that, compared with existing CR MAC protocols, the proposed MAC protocol can significantly improve the spectrum utilization efficiency of wireless networks. Finally, the performance of the proposed MAC protocol is optimized by incorporating multi-level security and making it energy efficient.
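A back-of-the-envelope decomposition of the pre-transmission time mentioned above, with assumed component durations (sensing, retuning, control handshake); the figures are illustrative only and are not taken from the thesis.

```python
# Illustrative decomposition of a cognitive radio node's pre-transmission
# time: spectrum sensing, retuning to a control channel, and the control
# handshake. All durations and counts below are assumptions.

t_sense_ms = 2.0        # assumed: spectrum sensing time per candidate channel
t_switch_ms = 0.5       # assumed: radio retuning to a control channel
t_ctrl_ms = 1.2         # assumed: control-frame handshake on the control channel
channels_sensed = 3     # assumed: channels examined before a vacancy is found

pre_tx_ms = channels_sensed * t_sense_ms + t_switch_ms + t_ctrl_ms
print(f"Pre-transmission time: {pre_tx_ms:.1f} ms")
# A delay-sensitive packet with, say, a 10 ms budget would already have spent
# most of it before transmission even starts.
```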
19

Convergence: the next big step

Paliwal, Gaurav. January 2006 (has links)
Thesis (M.S.)--Rochester Institute of Technology, 2006. Typescript. Includes bibliographical references (p. 158-168).
20

The characterisation and modelling of the wireless propagation channel in small cells scenarios

Fang, Cheng January 2015 (has links)
The rapid growth in wireless data traffic in recent years has placed a great strain on the wireless spectrum and the capacity of current wireless networks. In addition, the makeup of the typical wireless propagation environment is changing rapidly as a greater share of data traffic moves indoors, where the coverage of radio signals is poor. This dual-fronted assault on coverage and capacity means that the traditional cellular model is no longer sustainable, as the gains from constructing new macrocells fall short of the increasing cost. The key emerging concept that can address these challenges is smaller base stations, such as micro-, pico- and femto-cells, collectively known as small cells. This solution, however, brings new challenges: while small cells are effective at improving indoor coverage and capacity, they further compound the shortage of spectrum and cause high levels of interference. Current channel models are not suited to characterising this interference because the small cell propagation environment is vastly different, and as a result the overall efficiency of the network suffers. This thesis presents an investigation into the characteristics of the wireless propagation channel in small cell environments, including measurement, analysis, modelling, validation and extraction of channel data. Two comprehensive data collection campaigns were carried out; one of them employed a RUSK channel sounder and featured dual-polarised MIMO antennas. From the first dataset an empirical path loss model, adapted to the typical indoor and outdoor scenarios found in small cell environments, was constructed using regression analysis, and it was validated against the second dataset. The model shows good accuracy for small cell environments and can be implemented quickly in system-level simulations with minimal computational requirements.
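A minimal sketch of how such an empirical path loss model can be fitted by regression, assuming the standard log-distance form PL(d) = PL0 + 10*n*log10(d/d0) and invented measurement samples; the thesis's actual model is derived from the channel-sounder campaigns described above.

```python
# Fit a log-distance path loss model to measured samples by least squares.
# The measurement points below are invented for illustration.
import numpy as np

d0 = 1.0                                           # reference distance (m)
distances = np.array([2, 5, 10, 20, 40, 80.0])     # metres (assumed samples)
path_loss = np.array([46, 55, 61, 68, 75, 83.0])   # dB (assumed samples)

X = 10.0 * np.log10(distances / d0)                # regressor: 10*log10(d/d0)
A = np.column_stack([np.ones_like(X), X])          # columns: [1, 10*log10(d/d0)]
(pl0, n), *_ = np.linalg.lstsq(A, path_loss, rcond=None)

print(f"PL0 = {pl0:.1f} dB, path loss exponent n = {n:.2f}")
print("Predicted PL at 30 m:", pl0 + n * 10.0 * np.log10(30.0 / d0), "dB")
```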
