
Q-Fabric: System Support for Continuous Online Quality Management

Poellabauer, Christian 12 April 2004 (has links)
The explosive growth in networked systems and applications and the increase in device capabilities (as evidenced by the availability of inexpensive multimedia devices) enable novel complex distributed applications, including video conferencing, on-demand computing services, and virtual environments. These applications' requirements for high performance, real-time behavior, or reliability demand Quality of Service (QoS) guarantees along the path of information exchange between two or more communicating systems. Execution environments that are prone to dynamic variability and uncertainty make QoS provision a challenging task; e.g., changes in user behavior, resource requirements, resource availabilities, or system failures are difficult or even impossible to predict. Further, with the coexistence of multiple adaptation techniques and resource management mechanisms, it becomes increasingly important to provide an integrated, cooperative approach to distributed QoS management. This work's goals are the provision of the system-level tools needed to efficiently integrate the multiple adaptation approaches available at different layers of a system (e.g., application level, operating system, or network) and the use of these tools such that distributed QoS management is performed efficiently with predictable results. These goals are addressed constructively and experimentally with the Q-Fabric architecture, which provides the required system-level mechanisms to efficiently integrate multiple adaptation techniques. The foundation of this integration is its event-based communication, which realizes the loosely-coupled group communication approach frequently found in multi-peer applications. Experimental evaluations are performed in the context of a mobile multimedia application, where the focus is on efficient energy consumption on battery-operated devices. Here, integration is particularly important to prevent the multiple energy management techniques found on modern mobile devices from negating each other's energy savings.
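The event-based, loosely-coupled group communication that underpins this integration can be sketched as a minimal topic-based publish/subscribe channel: publishers and subscribers interact only through a shared topic, never directly. This is an illustrative sketch, not Q-Fabric's actual API; all names (EventChannel, the "energy" topic, the battery threshold) are invented.

```python
from collections import defaultdict

class EventChannel:
    """Topic-based publish/subscribe: adaptation components at different
    layers coordinate through topics without referencing each other."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self._subscribers[topic]:
            handler(event)

# Example: an OS-level energy monitor publishes a battery event; an
# application-level adapter reacts by lowering its video quality.
channel = EventChannel()
quality = {"level": "high"}

def on_low_battery(event):
    if event["battery_pct"] < 20:
        quality["level"] = "low"   # application-level adaptation

channel.subscribe("energy", on_low_battery)
channel.publish("energy", {"battery_pct": 15})
print(quality["level"])
```

Because neither side holds a reference to the other, an energy manager and a video adapter can be integrated, or replaced, without code changes on the opposite side, which is the point of the loose coupling described above.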

Quality of service with DiffServ architecture in hybrid mesh/relay networks

Lee, Myounghwan 12 May 2010 (has links)
The objective of this research is to develop an optimized quality of service (QoS) assurance algorithm with the differentiated services (DiffServ) architecture, and a differentiated polling algorithm with efficient bandwidth allocation for QoS assurance in hybrid multi-hop mesh/relay networks. These wide area networks (WANs), which will employ a connection-based MAC protocol, along with QoS-enabled wireless local area networks (WLANs) that use a contention-based MAC protocol, need to provide an end-to-end QoS guarantee for data communications, particularly QoS-sensitive multimedia communications. Due to the high cost of construction and maintenance of infrastructure in wireless networks, engineers and researchers have focused their investigations on wireless mesh/relay networks with lower cost and high scalability. For current wireless multi-hop networks, an end-to-end QoS guarantee is an important functionality to add, because the demand for real-time multimedia communications has recently been increasing. For real-time multimedia communication in heterogeneous networks, hybrid multi-hop mesh/relay networks using a connection-based MAC protocol, along with QoS-enabled WLANs that use a contention-based MAC protocol, can be an effective multi-hop network model, as opposed to multi-hop networks with a contention-based MAC protocol without a QoS mechanism. To provide integrated QoS support for different QoS mechanisms, the design of a cross-layer DiffServ architecture that can be applied in wireless multi-hop mesh/relay networks with WLANs is desirable. For parameterized QoS that requires a specific set of QoS parameters in hybrid multi-hop networks, an optimized QoS assurance algorithm with the DiffServ architecture is proposed here that supports end-to-end QoS through a QoS-enhanced WAN for multimedia communications.
For a QoS assurance algorithm that requires a minimum per-hop delay, the proper bandwidth to allow the per-hop delay constraint needs to be allocated. Therefore, a polling algorithm with a differentiated strategy at multi-hop routers is proposed here. The proposed polling algorithm at a router differentially computes and distributes the polling rates for routers according to the ratio of multimedia traffic to overall traffic, the number of traffic connections, and the type of polling service. By simulating the architecture and the algorithms proposed in this thesis and by analyzing traffic with the differentiated QoS requirement, it is shown here that the architecture and the algorithms produce an excellent end-to-end QoS guarantee.
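The differentiated polling idea can be illustrated with a small sketch that splits a fixed polling budget among downstream routers in proportion to each one's multimedia traffic share and connection count. The weighting formula and field names are assumptions for illustration, not the thesis's exact algorithm.

```python
def polling_rates(routers, total_polls_per_sec):
    """routers: dicts with 'mm_traffic', 'total_traffic', 'connections'.
    Returns one polling rate (polls/sec) per router."""
    weights = []
    for r in routers:
        # Weight by the ratio of multimedia traffic to overall traffic,
        # scaled by the number of active connections.
        mm_ratio = r["mm_traffic"] / r["total_traffic"] if r["total_traffic"] else 0.0
        weights.append(mm_ratio * r["connections"])
    total = sum(weights) or 1.0
    return [total_polls_per_sec * w / total for w in weights]

# Two routers with equal connection counts but different multimedia shares:
rates = polling_rates(
    [{"mm_traffic": 8, "total_traffic": 10, "connections": 5},
     {"mm_traffic": 2, "total_traffic": 10, "connections": 5}],
    total_polls_per_sec=100,
)
print(rates)  # the multimedia-heavy router is polled four times as often
```

Allocating the polling budget this way lets a per-hop delay bound be met for multimedia flows without statically over-provisioning every router.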

Study of network-service disruptions using heterogeneous data and statistical learning

Erjongmanee, Supaporn 21 January 2011 (has links)
The study of network-service disruptions caused by large-scale disturbances has mainly focused on assessing network damage; however, network-disruption responses, i.e., how the disruptions unfold depending on social organizations, weather, and power resources, have received little study. The goal of this research is to study the responses of network-service disruptions caused by large-scale disturbances with respect to (1) the temporal and logical network, and (2) external factors such as weather and power resources, using real, publicly available heterogeneous data composed of network measurements, user inputs, organizations, geographic locations, weather, and power outage reports. Network-service disruptions at the subnet level caused by Hurricane Katrina in 2005 and Hurricane Ike in 2008 are used as the case studies. The analysis of network-disruption responses with respect to the temporal and logical network shows that subnets became unreachable in a dependent fashion within an organization, across organizations, and across autonomous systems. Thus, temporal dependence also illustrates the characteristics of logical dependence. In addition, subnet unreachability is analyzed with respect to the external factors. It is found that subnet unreachability and the storm are weakly correlated. The weak correlation motivates us to search for root causes and discover that the majority of subnet unreachability reportedly occurred because of power outages or a lack of power generators. Using the power outage data, it is found that subnet unreachability and power outages are strongly correlated.
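The correlation step can be illustrated with a plain Pearson correlation between an unreachability time series and a power-outage time series. The toy numbers below are invented for the sketch; the thesis used real Katrina/Ike measurement data.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hourly counts of newly unreachable subnets vs. outage reports (toy data):
unreachable = [0, 2, 9, 14, 12, 7, 3]
outages     = [0, 1, 8, 15, 13, 6, 2]
r = pearson(unreachable, outages)
print(round(r, 3))  # close to 1, i.e., strongly correlated
```

A high coefficient against outages combined with a low one against storm intensity is exactly the pattern that points to power, rather than the storm directly, as the dominant root cause.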

MPEG-4 AVC traffic analysis and bandwidth prediction for broadband cable networks

Lanfranchi, Laetitia I. January 2008 (has links)
Thesis (M. S.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2008. / Committee Chair: Bing Benny; Committee Co-Chair: Fred B-H. Juang; Committee Member: Gee-Kung Chang. Part of the SMARTech Electronic Thesis and Dissertation Collection.

Characterizing and improving last mile performance using home networking infrastructure

Sundaresan, Srikanth 27 August 2014 (has links)
More than a billion people access the Internet through residential broadband connections worldwide, and this number is projected to grow further. Surprisingly, little is known about some important properties of these networks: What performance do users obtain from their ISP? What factors affect performance of broadband networks? Are users bottlenecked by their ISP or by their home network? How are applications such as the Web affected by these factors? Answering these questions is difficult; there is tremendous diversity of technologies and applications in home and broadband networks. While a lot of research has tackled these questions piecemeal, the lack of a good vantage point to obtain good measurements from these networks makes it notably difficult to do a holistic characterization of the "last mile". In this dissertation we use the home gateway to characterize home and access networks and mitigate performance bottlenecks that are specific to such networks. The home gateway is uniquely situated; it is always on and, as the hub of the network, it can directly observe the home network, the access network, and user traffic. We present one such gateway-based platform, BISmark, that currently has nearly 200 active access points in over 20 countries. We do a holistic characterization of three important components of the last mile using the gateway as the vantage point: the access link that connects the user to the wider Internet, the home network to which devices connect, and Web performance, one of the most commonly used applications in today's Internet. We first describe the design, development, and deployment of the BISmark platform. BISmark uses custom gateways to enable measurements and evaluate performance optimizations directly from home networks. We characterize access link performance in the US using measurements from the gateway; we evaluate existing techniques and propose new techniques that help us understand these networks better.
We show how access link technology and home networking hardware can affect performance. We then develop a new system that uses passive measurements at the gateway to localize bottlenecks to either the wireless network or the access link. We deploy this system in 64 homes worldwide and characterize the nature of bottlenecks and the state of the wireless network in these homes; specifically, we show that the wireless network is rarely the bottleneck, as wireless throughput typically exceeds 35 Mbits/s. Finally, we characterize bottlenecks that affect Web performance and are specific to the last mile. We show how latency in the last mile causes page load times to stagnate once throughput exceeds 16 Mbits/s, and how simple techniques deployed at the gateway can mitigate these bottlenecks.
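In its simplest form, the bottleneck-localization idea reduces to comparing passively observed wireless throughput against the access-link capacity. The sketch below is a deliberate simplification with invented names; the dissertation's system relies on richer passive metrics collected at the gateway.

```python
def locate_bottleneck(wireless_tput_mbps, access_link_mbps):
    """Attribute the bottleneck to whichever segment sustains less throughput,
    as seen from the gateway sitting between the two."""
    if wireless_tput_mbps < access_link_mbps:
        return "wireless"
    return "access link"

# With home wireless throughput typically above 35 Mbit/s, an access link
# provisioned well below that is the likely bottleneck:
print(locate_bottleneck(wireless_tput_mbps=40, access_link_mbps=10))
```

The gateway is the one vantage point that sees both segments of every flow, which is why even this crude comparison can be made passively, without injecting probe traffic into the home.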

CMOS RF SOC Transmitter Front-End, Power Management and Digital Analog Interface

Leung, Matthew Chung-Hin 19 May 2008 (has links)
With the growing trend of wireless electronics, the frequency spectrum is crowded with different applications. High data transfer rate solutions that operate in the license-exempt frequency spectrum are sought. The most promising candidate is the 60 GHz multi-gigabit transfer rate millimeter wave circuit. In order to provide a cost-effective solution, circuits designed in CMOS are implemented in a single SOC. In this work, a modeling technique created in Cadence shows an error of less than 3 dB in magnitude and 5 degrees in phase for a single transistor. Additionally, less than 3 dB error in power performance for the PA is also verified. At the same time, layout strategies required for millimeter wave front-end circuits are investigated. All of these combined techniques help the design converge to one simulation platform for system level simulation. Another aspect enabling the design as a single SOC lies in integration. In order to integrate digital and analog circuits together, the necessary peripheral circuits must be designed. An on-chip voltage regulator, which steps down the analog power supply voltage and is compatible with digital circuits, has been designed and has demonstrated an efficiency of 65 percent under the specified area constraint. The overall output voltage ripple generated is about 2 percent. With the necessary power supply voltage, gate voltage bias circuit designs have been illustrated. They provide feasible solutions in terms of area and power consumption. Temperature and power supply sensitivities are minimized in the first two designs. Process variation is further compensated in the third design. The third design demonstrates a robust solution in which each source of variation remains well within 10 percent. As the DC conditions are achieved on-chip for both the digital and analog circuits, the digital and analog circuits must be connected together with a DAC. A high speed DAC is designed with special layout techniques.
It is verified that the DAC can operate at a speed higher than 3 Gbps from the pulse-shaping FIR filter measurement result. With all of these integrated elements and modeling techniques, a high data transfer rate CMOS RF SOC operating at 60 GHz is possible.
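As a back-of-the-envelope check of the regulator figures above, the standard buck-converter output ripple relation, delta_v = delta_i / (8 * f_sw * c_out), can reproduce a ripple of roughly 2 percent. Only the 65 percent efficiency and the ~2 percent ripple come from the text; the ripple current, switching frequency, output capacitance, and 1.2 V rail below are invented for illustration.

```python
def efficiency(p_out, p_in):
    """Regulator efficiency as output power over input power."""
    return p_out / p_in

def output_ripple(delta_i, f_sw, c_out):
    # Standard buck-converter output ripple: delta_v = delta_i / (8 * f * C)
    return delta_i / (8 * f_sw * c_out)

eff = efficiency(p_out=65e-3, p_in=100e-3)       # matches the reported 65%
ripple_v = output_ripple(delta_i=0.05, f_sw=100e6, c_out=2.6e-9)
ripple_pct = 100 * ripple_v / 1.2                # as a percentage of a 1.2 V rail
print(round(eff, 2), round(ripple_pct, 1))
```

The exercise shows why on-chip regulators push switching frequencies high: at 100 MHz, a ~2 percent ripple is achievable with only a few nanofarads of on-die capacitance, consistent with a tight area budget.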

Asset Management in Electricity Transmission Enterprises: Factors that affect Asset Management Policies and Practices of Electricity Transmission Enterprises and their Impact on Performance

Crisp, Jennifer J. January 2004 (has links)
This thesis draws on techniques from Management Science and Artificial Intelligence to explore organisational aspects of asset management in electricity transmission enterprises. In this research, factors that influence policies and practices of asset management within electricity transmission enterprises have been identified, in order to examine their interaction and how they impact the policies, practices and performance of transmission businesses. It has been found that, while there is extensive literature on the economics of transmission regulation and pricing, there is little published research linking the engineering and financial aspects of transmission asset management at a management policy level. To remedy this situation, this investigation has drawn on a wide range of literature, together with expert interviews and personal knowledge of the electricity industry, to construct a conceptual model of asset management with broad applicability across transmission enterprises in different parts of the world. A concise representation of the model has been formulated using a Causal Loop Diagram (CLD). To investigate the interactions between factors of influence it is necessary to implement the model and validate it against known outcomes. However, because of the nature of the data (a mix of numeric and non-numeric data, imprecise, incomplete and often approximate) and complexity and imprecision in the definition of relationships between elements, this problem is intractable to modelling by traditional engineering methodologies. The solution has been to utilise techniques from other disciplines. Two implementations have been explored: a multi-level fuzzy rule-based model and a system dynamics model; they offer different but complementary insights into transmission asset management. Each model shows potential for use by transmission businesses for strategic-level decision support. 
The research demonstrates the key impact of routine maintenance effectiveness on the condition and performance of transmission system assets. However, performance of the transmission network is not only related to equipment performance but is also a function of system design and operational aspects, such as loading and load factor. The type and supportiveness of regulation, together with the objectives and corporate culture of the transmission organisation, also play roles in promoting various strategies for asset management. The cumulative effect of all these drivers is to produce differences in asset management policies and practices, discernible between individual companies and at a regional level, where similar conditions have applied both historically and today.
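The flavor of the multi-level fuzzy rule-based model can be conveyed with a toy two-rule example: maintenance effectiveness and loading are fuzzified with triangular membership functions, and weighted-average defuzzification yields a crisp asset-condition score. The membership functions, rules, and weights are invented for illustration; the thesis's model is far richer and spans many more factors.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def asset_condition(maintenance, loading):
    """Inputs in [0, 1]; returns a crisp condition score in [0, 1]."""
    good_maint = tri(maintenance, 0.4, 1.0, 1.6)   # "maintenance is effective"
    high_load  = tri(loading, 0.4, 1.0, 1.6)       # "network is heavily loaded"
    # Rule 1: effective maintenance -> good condition (consequent 0.9)
    # Rule 2: heavy loading -> degraded condition (consequent 0.3)
    w1, w2 = good_maint, high_load
    if w1 + w2 == 0:
        return 0.5                                  # no rule fires: neutral score
    return (w1 * 0.9 + w2 * 0.3) / (w1 + w2)        # weighted-average defuzzification

print(round(asset_condition(maintenance=0.9, loading=0.5), 2))
```

This kind of rule base is what makes the approach tolerant of the imprecise, mixed numeric and non-numeric data described above: each factor only needs a rough membership grade, not an exact measurement.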

Protocol design for real time multimedia communication over high-speed wireless networks : a thesis submitted in fulfilment of the requirements for the award of Doctor of Philosophy

Abd Latif, Suhaimi bin January 2010 (has links)
The growth of interactive multimedia (IMM) applications is one of the major driving forces behind the swift evolution of next-generation wireless networks, where the traffic is expected to be varying and widely diversified. The amalgamation of multimedia applications on high-speed wireless networks is a somewhat natural evolution. The wireless local area network (WLAN) was initially developed to carry non-real-time data. Since this type of traffic is bursty in nature, the channel access schemes were based on contention. However, real-time traffic (e.g., voice, video, and other IMM applications) differs from this traditional data traffic, as it has stringent constraints on quality of service (QoS) metrics like delay, jitter, and throughput. Employing contention-free channel access schemes implemented on the point coordination function (PCF), as opposed to the numerous works on contending access schemes, is the plausible and intuitive approach to accommodate these innate requirements. Published research shows that work has been done on improving the distributed coordination function (DCF) to handle IMM traffic. Since WLAN traffic today is a mix of both, it is only natural to utilize both DCF and PCF in a balanced manner to leverage the inherent strengths of each. We saw scope in this technique and developed a scheme that combines both contention-based and contention-free phases to handle heterogeneous traffic in WLAN. Standard access schemes, such as 802.11e, improve DCF functionality by trying to emulate the functions of the PCF. Researchers have made a multitude of improvements to 802.11e to reduce the costs of implementing the scheme on WLAN. We instead explore improving the PCF, as it is more stable and implementations would be less costly. The initial part of this research investigates the effectiveness of the point coordination function (PCF) for carrying interactive multimedia traffic in WLAN.
The performance statistics of IMM traffic were gathered and analyzed. Our results showed that a PCF-based setup for IMM traffic is most suitable for high-load scenarios. We confirmed that there is scope for improving IMM transmissions on WLAN by using the PCF. This is supported by published research on PCF-related schemes for carrying IMM traffic on WLAN. Further investigations, via simulations, revealed that partitioning the superframe (SF) duration according to the needs of the IMM traffic has a considerable impact on the QoS of the WLAN. A theoretical model has been developed to model the two phases, i.e., PCF and DCF, of the WLAN medium access control (MAC). With this model, an optimum value of the contention-free period (CFP) was calculated to meet the QoS requirements of the IMM traffic being transmitted. Treating IMM traffic as data traffic, or equating IMM and non-IMM traffic, could compromise the fair treatment that should be given to this QoS-sensitive traffic. A self-adaptive scheme, called the MAC with Dynamic Superframe Selection (MDSS) scheme, generates an optimum SF configuration according to the QoS requirements of traversing IMM traffic. That scheme is shown to provide more efficient transmission on WLAN. MDSS maximizes the utilization of the CFP while providing fairness to the contention period (CP). The performance of MDSS is compared to that of 802.11e, which is taken as the benchmark. Jitter and delay results for MDSS are relatively lower, while throughput is higher. This confirms that MDSS is capable of making a significant improvement over the standard access scheme.
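The superframe-partitioning idea can be sketched as follows: give the CFP just enough time to poll each IMM station once, while reserving a minimum CP so contention traffic is never starved. The timing constants and function names are assumptions for illustration, not the thesis's derived optimum.

```python
def partition_superframe(sf_ms, imm_stations, poll_ms_per_station, min_cp_ms):
    """Split a superframe of sf_ms into (CFP, CP) durations in milliseconds.
    The CFP grows with the number of IMM stations but is capped so the CP
    keeps at least min_cp_ms (fairness to contention traffic)."""
    cfp = imm_stations * poll_ms_per_station
    cfp = min(cfp, sf_ms - min_cp_ms)
    return cfp, sf_ms - cfp

# Eight IMM stations, 5 ms per poll, 100 ms superframe, 20 ms CP floor:
cfp, cp = partition_superframe(sf_ms=100, imm_stations=8,
                               poll_ms_per_station=5, min_cp_ms=20)
print(cfp, cp)
```

Recomputing this split each superframe, as the number of active IMM flows changes, is the self-adaptive behavior that a dynamic-superframe scheme like MDSS automates.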

Routing and wavelength assignment in all-optical DWDM networks with sparse wavelength conversion capabilities

Al-Fuqaha, Ala Isam. Chaudhry, Ghulam M. January 2004 (has links)
Thesis (Ph. D.)--School of Computing and Engineering, University of Missouri--Kansas City, 2004. / "A dissertation in engineering and computer networking." Advisor: Ghulam Chaudhry. Typescript. Vita. Title from "catalog record" of the print edition. Description based on contents viewed Feb. 22, 2006. Includes bibliographical references (leaves 135-157). Online version of the print edition.

Network virtualization as enabler for cloud networking

Turull, Daniel January 2016 (has links)
The Internet has grown exponentially and is now part of our everyday life. Internet services and applications rely on back-end servers that are deployed on local servers and in data centers. With the growing use of data centers and cloud computing, the locations of these servers have been externalized and centralized, taking advantage of economies of scale. However, some applications need to define complex network topologies and require more than simple connectivity to the remote sites. Therefore, the network part of cloud computing, what is called cloud networking, needs to be improved and simplified. This thesis argues that network virtualization can fill this gap, and we propose a network virtualization abstraction layer to ease the use of cloud networking for end users. We implement a software prototype of our ideas using OpenFlow. We also evaluate our prototype against state-of-the-art controllers that have similar functionality for network virtualization. A second part of this thesis focuses on developing a tool for performance testing. We have improved the widely used tool pktgen with receiver functionality. We use pktgen to generate traffic for our experiments with network virtualization.
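The abstraction-layer idea, in which a user declares a virtual topology that is compiled into per-switch forwarding state for an OpenFlow controller to install, can be sketched very roughly as below. All names and the data layout are illustrative assumptions, not the prototype's actual API.

```python
def compile_virtual_links(links):
    """links: (src_host, dst_host, via_switch) tuples describing a virtual
    topology. Returns a per-switch list of match/action forwarding entries,
    the kind of state an OpenFlow controller would push to each switch."""
    tables = {}
    for src, dst, switch in links:
        tables.setdefault(switch, []).append(
            {"match": {"src": src, "dst": dst}, "action": f"forward_to:{dst}"}
        )
    return tables

# A user-declared virtual topology: vm1 and vm2 talk via switch s1,
# and vm1 reaches a database via switch s2.
tables = compile_virtual_links([
    ("vm1", "vm2", "s1"),
    ("vm2", "vm1", "s1"),
    ("vm1", "db", "s2"),
])
print(len(tables["s1"]), len(tables["s2"]))
```

The value of the layer is that the user only writes the declarative link list; the translation to switch-level state, the part that makes cloud networking hard, is hidden behind the abstraction.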
