61

Cost and performance impacts of optical amplifier technology on fiber-optic communication networks

Scott, Davidson Arthur 16 February 2010 (has links)
Master of Science
62

A GPU based X-Engine for the MeerKAT Radio Telescope

Callanan, Gareth Mitchell January 2020 (has links)
The correlator is a key component of the digital backend of a modern radio telescope array. The 64-antenna MeerKAT telescope has an FX-architecture correlator consisting of 64 F-Engines and 256 X-Engines. These F- and X-Engines are all hosted on 128 custom-designed FPGA processing boards, known as SKARABs. One SKARAB X-Engine board hosts four logical X-Engines, ingesting data at 27.2 Gbps over a 40 GbE connection and correlating it in real time. GPU technology has improved significantly since the SKARAB was designed, and GPUs are now becoming viable alternatives to FPGAs in high-performance streaming applications. The objective of this dissertation is to investigate how to build a GPU drop-in replacement X-Engine for MeerKAT and to compare this implementation to a SKARAB X-Engine. This includes the construction and analysis of a prototype GPU X-Engine. The 40 GbE ingest, the GPU correlation algorithm, and the software pipeline framework linking the two were identified as the three main sub-systems to focus on in this dissertation. A number of tools implementing these sub-systems were examined, with the most suitable ones chosen for the prototype. A dual-socket prototype was built that could process the equivalent of two SKARABs' worth of X-Engine data. It has two 40 GbE Mellanox NICs running the SPEAD2 library and a single Nvidia GeForce 1080 Ti GPU running the xGPU library. A custom pipeline framework built on top of the Intel Threading Building Blocks (TBB) library was designed to manage the flow of data between these sub-systems. The prototype system was compared to two SKARABs: for an equivalent amount of processing, the GPU X-Engine cost R143 000 while the two SKARABs cost R490 000. The power consumption of the GPU X-Engine was more than twice that of the SKARABs (400 W compared to 180 W), while requiring only half as much rack space.
GPUs as X-Engines were found to be more suitable than FPGAs when cost and density are the main priorities; when power consumption is the priority, FPGAs should be used. When running eight logical X-Engines, 85% of the prototype's CPU cores were used while only 75% of the GPU's compute capacity was utilised, so the main bottleneck of the GPU X-Engine was on the CPU side of the server. This report suggests that the next iteration of the system should offload some CPU-side processing to the GPU and double the number of 40 GbE ports, potentially doubling the system throughput. When considering methods to improve this system, an FPGA/GPU hybrid X-Engine concept was developed that would combine the power-saving advantage of FPGAs with the low cost-to-compute ratio of GPUs.
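The cross-multiply-accumulate that an X-Engine performs per frequency channel can be sketched in a few lines of NumPy. This is an illustrative CPU-side sketch only; the array layout and function name are assumptions, and the actual MeerKAT pipeline runs this stage on FPGAs or, in the prototype above, on the GPU via the xGPU library.

```python
import numpy as np

def x_engine(samples):
    """Cross-correlate channelised voltages (one X-Engine's workload).

    samples: complex array of shape (n_time, n_chan, n_ant) holding the
    per-antenna frequency-domain samples produced by the F-Engines.
    Returns accumulated visibilities of shape (n_chan, n_ant, n_ant).
    """
    n_time, n_chan, n_ant = samples.shape
    vis = np.zeros((n_chan, n_ant, n_ant), dtype=np.complex128)
    for t in range(n_time):
        for c in range(n_chan):
            v = samples[t, c]                # voltages for all antennas
            vis[c] += np.outer(v, v.conj())  # cross-multiply-accumulate
    return vis
```

Each per-channel visibility matrix is Hermitian, with antenna auto-correlations (total power) on the diagonal, which is why real correlators only compute and ship the upper triangle.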
63

Resource Allocation in Future Terahertz Networks

Hedhly, Wafa 05 1900 (has links)
The Terahertz (THz) band occupies the unused spectrum between the microwave and optical bands, spanning frequencies from 0.1 to 10 THz. As a result, THz signals can be generated using either electronic or photonic circuits. Moreover, the channel gain has hybrid features from both the microwave and optical bands, allowing systems to reap the benefits of each. Adopting this technology can mitigate spectrum scarcity and complement other systems such as visible light communications. Despite the generous bandwidth, THz communications suffer from high attenuation that increases with the adopted frequency, as in the microwave band. Furthermore, THz communications are subject to an additional type of attenuation, molecular absorption, which depends on the chemical composition of the ambient air. Thus, THz transmitters need extra power and high antenna gains to overcome signal loss and compensate for the short transmission range. In this thesis, we investigate the pathloss model to compute the overall attenuation faced by a THz wave for different frequencies and weather conditions. Then, we use THz technology to support the operation of uplink networks using directional narrow beams. We optimize the uplink network resources, namely the frequency bands and the assigned power, in order to minimize the total power consumption while achieving a specific quality of service. Furthermore, we investigate the impact of weather conditions and the system's requirements in order to guarantee better performance.
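A THz pathloss model of the kind described above combines free-space spreading loss with Beer-Lambert molecular absorption. The sketch below assumes a total loss of the form L(f, d) = (4πfd/c)² · e^{k(f)·d}, where k is an absorption coefficient that in practice depends on frequency and weather (chiefly humidity); the function name and interface are illustrative, not the thesis's model.

```python
import math

C = 3.0e8  # speed of light, m/s

def thz_path_loss_db(freq_hz, dist_m, k_abs):
    """Total THz path loss in dB.

    Free-space spreading loss (Friis) plus molecular absorption loss
    exp(k_abs * d) per the Beer-Lambert law. k_abs is the absorption
    coefficient of the medium in 1/m.
    """
    spreading_db = 20 * math.log10(4 * math.pi * freq_hz * dist_m / C)
    absorption_db = 10 * k_abs * dist_m * math.log10(math.e)
    return spreading_db + absorption_db
```

With k_abs = 0 the expression reduces to pure free-space loss; raising k_abs (e.g. for humid air at an absorption peak) adds loss linearly in distance on the dB scale, which is what makes THz links so range-limited.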
64

Bandwidth Aggregation Across Multiple Smartphone Devices

Zeller, Bradley R 01 January 2014 (has links) (PDF)
Smartphones now account for the majority of all cell phones in use today [23]. Ubiquitous Internet access is a valuable feature offered by these devices, and the vast majority of smartphone applications make use of the Internet in one way or another. However, the bandwidth offered by these cellular networks is often much lower than we typically experience on our standard home networks, leading to a less-than-optimal user experience. This makes it challenging and frustrating to access certain types of web content such as video streams, large file downloads, and large webpages. Given that most modern smartphones are multi-homed and capable of accessing multiple networks simultaneously, this thesis attempts to utilize all available network interfaces in order to achieve their aggregated bandwidth and improve the overall network performance of the phone. To do so, I implement a bandwidth aggregation system for iOS that combines the bandwidths of multiple devices located within close proximity of each other. Deployed on up to three devices, speedups of up to 1.82x were achieved for downloading a single 10 MB file. Webpage loading saw speedups of up to 1.55x.
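A common way to aggregate bandwidth across cooperating devices, and a plausible reading of the approach above, is to split a download into contiguous byte ranges sized in proportion to each device's measured bandwidth, so that all devices finish at roughly the same time. The function below is an illustrative sketch with assumed names, not the thesis's actual iOS implementation.

```python
def split_byte_ranges(file_size, bandwidths):
    """Partition [0, file_size) into one byte range per helper device,
    sized proportionally to that device's measured bandwidth."""
    total_bw = sum(bandwidths)
    ranges, start = [], 0
    for i, bw in enumerate(bandwidths):
        if i == len(bandwidths) - 1:
            end = file_size  # last device absorbs rounding remainder
        else:
            end = start + round(file_size * bw / total_bw)
        ranges.append((start, end))
        start = end
    return ranges
```

Each device would then issue an HTTP Range request for its slice and forward the bytes to the coordinating phone, which reassembles them in order.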
65

A Selective Approach to Bandwidth Overbooking

Huang, Feng 23 March 2006 (has links) (PDF)
Overbooking is a technique used by network providers to increase bandwidth utilization. If the overbooking factor is chosen appropriately, additional virtual circuits can be admitted without degrading quality of service for existing customers. Most existing implementations use a single factor to accept a linear fraction of traffic requests. High values of this factor may degrade quality of service, whereas low overbooking factors result in underutilization of bandwidth. Network providers often select overbooking factors based only on aggregate average virtual circuit utilization. This paper proposes a selective overbooking scheme based on trunk size and usage profile. Experiments and analysis show that the new overbooking policy results in superior network performance.
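The selective policy described above can be sketched as an admission check whose overbooking factor is chosen per trunk from its size and usage profile, rather than a single global factor. The thresholds and factor values below are invented for illustration only; the paper's actual parameters are not given in the abstract.

```python
def admit(trunk_capacity, reserved_bw, request_bw, mean_utilisation):
    """Selective overbooking admission check (illustrative sketch).

    Large, lightly used trunks tolerate more statistical multiplexing,
    so they get a higher overbooking factor; small or busy trunks get
    a conservative one.
    """
    if mean_utilisation < 0.3:
        factor = 2.0      # low usage: overbook aggressively
    elif mean_utilisation < 0.6:
        factor = 1.5
    else:
        factor = 1.1      # heavily used trunk: almost no overbooking
    if trunk_capacity < 10:
        factor = min(factor, 1.2)  # small trunks see less averaging
    return reserved_bw + request_bw <= trunk_capacity * factor
```

Under a single-factor scheme every trunk would use the same multiplier; the selective variant simply makes `factor` a function of the trunk's observed profile.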
66

An adaptive QoS framework for integrated cellular and WLAN networks.

Min, Geyong, Mellor, John E., Al-Begain, Khalid, Wang, Xin Gang, Guan, Lin January 2005 (has links)
No / The design of a network architecture that can efficiently integrate WLAN and cellular networks is a challenging task, particularly when the objective is to make the interoperation between the two networks as seamless and as efficient as possible. To provide end-to-end quality of service (QoS) support is one of the key stages towards such a goal. Due to various constraints, such as the unbalanced capacity of the two systems, handoff from user mobility and unreliable transmission media, end-to-end QoS is difficult to guarantee. In this paper, we propose a generic reservation-based QoS model for the integrated cellular and WLAN networks. It uses an adaptation mechanism to address the above issues and to support end-to-end QoS. The validity of the proposed scheme is demonstrated via simulation experiments. The performance results reveal that this new scheme can considerably improve the system resource utilization and reduce the call blocking probability and handoff dropping probability of the integrated networks while maintaining acceptable QoS to the end users.
67

Psychophysically Defined Gain Control Pool and Summing Circuit Bandwidths in Selective Pathways

Hibbeler, Patrick Joseph 01 December 2008 (has links)
No description available.
68

Design of and Decentralized Path Planning for Platoons of Miniature Autonomous Underwater Vehicles

Sylvester, Caleb Allen 28 October 2004 (has links)
Many successful control schemes for land-based or air-based groups, or platoons, of autonomous vehicles cannot be implemented in underwater applications because of their dependence upon high-bandwidth communication. In current strategies for controlling groups of autonomous underwater vehicles (AUVs), platoon size remains limited by communication bandwidth requirements. So, there is great need for advances in low-bandwidth control techniques for arbitrarily large platoons of AUVs. This thesis presents a new approach to multiple vehicle control. The concepts described herein enable an arbitrarily large platoon to be controlled while utilizing minimal inter-vehicle communication. Specifically, this thesis examines a sufficient condition on platoon commands in order for a low-bandwidth decentralized controller to exist. Knowing from this sufficient condition the necessary general form of platoon commands, a number of higher-order statistics were tested. This thesis describes and analyzes their utility as platoon commands. In addition to these theoretical developments, this thesis presents the practical design needs for the Virginia Tech miniature autonomous underwater vehicle as well as their resolution. / Master of Science
69

Bias reduction studies in nonparametric regression with applications : an empirical approach / Marike Krugell

Krugell, Marike January 2014 (has links)
The purpose of this study is to determine the effect of three improvement methods on nonparametric kernel regression estimators. The improvement methods are applied to the Nadaraya-Watson estimator with cross-validation bandwidth selection, the Nadaraya-Watson estimator with plug-in bandwidth selection, the local linear estimator with plug-in bandwidth selection, and a bias-corrected nonparametric estimator proposed by Yao (2012). The different resulting regression estimates are evaluated by minimising a global discrepancy measure, i.e. the mean integrated squared error (MISE). In the machine learning context, various improvement methods exist in terms of the precision and accuracy of an estimator. The first two improvement methods introduced in this study are bootstrap-based. Bagging is an acronym for bootstrap aggregating and was introduced by Breiman (1996a) from a machine learning viewpoint and by Swanepoel (1988, 1990) in a functional context. Bagging is primarily a variance reduction tool: it is implemented to reduce the variance of an estimator and in this way improve the precision of the estimation process. Bagging is performed by drawing repeated bootstrap samples from the original sample and generating multiple versions of an estimator; these replicates are then used to obtain an aggregated estimator. Bragging stands for bootstrap robust aggregating: a robust estimator is obtained by using the sample median over the B bootstrap estimates instead of the sample mean as in bagging. The third improvement method aims to reduce the bias component of the estimator and is referred to as boosting. Boosting is a general method for improving the accuracy of any given learning algorithm; it starts off with a sensible estimator and improves it iteratively, based on its performance on a training dataset. Results and conclusions verifying existing literature are provided, as well as new results for the new methods.
/ MSc (Statistics), North-West University, Potchefstroom Campus, 2015
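The estimators and resampling methods named above are standard enough to sketch directly: a Gaussian-kernel Nadaraya-Watson estimator, with bagging (bootstrap mean) and bragging (bootstrap median) layered on top. The function names and the fixed bandwidth are illustrative; the study itself selects bandwidths by cross-validation or plug-in rules.

```python
import numpy as np

def nadaraya_watson(x0, x, y, h):
    """Nadaraya-Watson kernel regression estimate at x0,
    using a Gaussian kernel with bandwidth h."""
    w = np.exp(-0.5 * ((x0 - x) / h) ** 2)
    return np.sum(w * y) / np.sum(w)

def bagged_nw(x0, x, y, h, B=100, robust=False, seed=0):
    """Bagging (mean) or, with robust=True, bragging (median)
    over B bootstrap replicates of the NW estimator."""
    rng = np.random.default_rng(seed)
    n = len(x)
    ests = []
    for _ in range(B):
        idx = rng.integers(0, n, size=n)  # bootstrap resample
        ests.append(nadaraya_watson(x0, x[idx], y[idx], h))
    return np.median(ests) if robust else np.mean(ests)
```

Bragging differs from bagging only in the final aggregation step, which is what makes the median version robust to the occasional wild bootstrap replicate.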