  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

HIDRA: Hierarchical Inter-domain Routing Architecture

Clevenger, Bryan 01 May 2010 (has links) (PDF)
As the Internet continues to expand, the global default-free zone (DFZ) forwarding table has begun to grow faster than hardware can economically keep pace with. Various policies are in place to mitigate this growth rate, but current projections indicate that policy alone is inadequate. As such, a number of technical solutions have been proposed. This work builds on many of these proposals and furthers the debate surrounding the resolution of this problem. It discusses several design decisions necessary to any proposed solution and, based on these tradeoffs, proposes a Hierarchical Inter-Domain Routing Architecture (HIDRA), a comprehensive architecture with a plausible deployment scenario. The architecture uses a locator/identifier split encapsulation scheme to attenuate both the immediate size of the DFZ forwarding table and its projected growth rate. This solution is based on an already existing number allocation policy: Autonomous System Numbers (ASNs). HIDRA has been deployed to a sandbox network in a proof-of-concept test, yielding promising results.
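The core idea of the locator/identifier split described in this abstract can be sketched in a few lines. This is an illustrative toy, not HIDRA's actual implementation: the dictionary, function names, and example ASNs/prefixes are assumptions. The point it shows is that when edge prefixes (identifiers) map to ASN-based locators, the DFZ table scales with the number of ASes rather than the number of prefixes.

```python
# Hypothetical sketch of an ASN-based locator/identifier split.
# Mapping of identifier prefixes to the locator (ASN) of the hosting AS.
identifier_to_locator = {
    "192.0.2.0/24":    64496,   # example prefix homed in AS 64496
    "198.51.100.0/24": 64496,   # second prefix, same AS: no new DFZ entry
    "203.0.113.0/24":  64497,
}

def encapsulate(dst_prefix: str, payload: bytes) -> tuple[int, bytes]:
    """Wrap a packet with the locator (ASN) for its destination prefix."""
    locator = identifier_to_locator[dst_prefix]
    return (locator, payload)

# The DFZ only needs to carry locators: its size tracks the number of ASes,
# not the number of announced prefixes.
dfz_table = sorted(set(identifier_to_locator.values()))
print(dfz_table)  # two DFZ entries cover three edge prefixes
```

The design choice this illustrates is that ASNs already exist and are already allocated per AS, so no new numbering authority is needed for the locator space.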

Optimizing Peer Selection among Internet Service Providers (ISPs)

Mustafa, Shahzeb 01 January 2021 (has links)
Connections among Internet Service Providers (ISPs) form the backbone of the Internet, enabling communication across the globe. ISPs are represented as Autonomous Systems (ASes) in the global Internet, and inter-ISP traffic exchange takes place via inter-AS links, which are formed based on inter-ISP connections and agreements. In addition to customer-provider agreements, a crucial type of inter-ISP agreement is peering. ISP administrators use platforms like APNIC and NANOG networking events to establish new peering connections in accordance with their business and technical needs. Such methods are often inefficient and slow, potentially resulting in missed opportunities or sub-optimal routes; the process can take several months and excessive amounts of paperwork. We investigate tools and algorithms that can make inter-AS connection formation more dynamic and reliable by helping ISPs make informed decisions in line with their needs. We analyze the largest public datasets from CAIDA and PeeringDB to identify common trends and requirements that ISPs have in the context of peering. Using this analysis, we develop a simple yet effective peering predictor model that identifies ISP pairs showing promising signs of forming a good peering relationship. To motivate research and development in this area, we develop an Internet eXchange Point (IXP) emulator that ISP admins can use as a testbed for analyzing different peering policies within an IXP. We further extend our ideas about peering to wireless cellular networks, design a working wireless peering model, and present how optimal agreements can be reached and the best wireless peering partners and areas chosen. With the exponential increase in traffic volume and dependency on the Internet, it is crucial that the underlying network be dynamic and robust. To this end, we address issues with peering from multiple angles and develop novel models for automation and optimization.
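The thesis's actual predictor model is not given in the abstract; as a hedged illustration of what a "peering predictor" over CAIDA/PeeringDB-style features might look like, here is a toy scoring function. The feature names, weights, and the balanced-traffic criterion are assumptions chosen for the sketch, not the thesis's model.

```python
# Illustrative (assumed) peering-predictor sketch: score an ISP pair from
# features of the kind available in public CAIDA/PeeringDB data.
def peering_score(shared_ixps: int, geo_overlap: float,
                  traffic_ratio: float) -> float:
    """Heuristic score in [0, 1]; higher means a more promising peer pair.

    shared_ixps   -- number of IXPs where both ISPs are present
    geo_overlap   -- fraction of overlapping service regions (0..1)
    traffic_ratio -- min/max of the pair's traffic volumes (0..1);
                     roughly balanced traffic is a common peering requirement
    """
    ixp_term = min(shared_ixps, 5) / 5          # saturate at 5 shared IXPs
    return 0.4 * ixp_term + 0.3 * geo_overlap + 0.3 * traffic_ratio

# A pair present at many common IXPs with balanced traffic scores high.
print(peering_score(shared_ixps=6, geo_overlap=0.8, traffic_ratio=0.9))
```

A real model would be fit to observed peering links rather than using hand-picked weights; the sketch only shows the shape of the decision input.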

Performance of digital communication systems with adaptive arrays

Ganz, Matthew W. January 1986 (has links)
No description available.

The Design of a QPSK Modem and the Probability of Error Analysis for an SCA Based Digital Communication System

Guediri, Fouad 01 January 1984 (has links) (PDF)
An analysis of the probability of error for an SCA-based digital communication system is developed. QPSK modulation and the signal-to-noise ratio degradation due to the SCA transmission are analyzed to predict the performance of the system in terms of probability of error. The design and analysis of a QPSK modem, a pseudorandom generator, and a pseudorandom correlator are also presented in this thesis.
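The probability-of-error analysis the abstract describes can be illustrated with the standard QPSK result for an AWGN channel, Pb = Q(sqrt(2·Eb/N0)): an SNR degradation (such as that attributed here to the SCA transmission) simply shifts Eb/N0 down before Pb is evaluated. The 3 dB degradation figure below is an illustrative assumption, not a number from the thesis.

```python
# Worked sketch: QPSK bit error probability over AWGN, and the effect of
# an (assumed) SNR degradation on it.
from math import erfc, sqrt

def qpsk_bit_error_prob(ebn0_db: float) -> float:
    """Pb = Q(sqrt(2*Eb/N0)) = 0.5*erfc(sqrt(Eb/N0)), Eb/N0 given in dB."""
    ebn0 = 10 ** (ebn0_db / 10)
    return 0.5 * erfc(sqrt(ebn0))

nominal = qpsk_bit_error_prob(10.0)         # roughly 4e-6 at 10 dB
degraded = qpsk_bit_error_prob(10.0 - 3.0)  # assumed 3 dB SNR degradation
print(nominal, degraded)                    # degradation raises Pb sharply
```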

Codes of power: Dimensional semiotics and photonic perspectives

Tong, Deborah Grace. January 1999 (has links)
No description available.

Electronic Mitigation of Polarization Mode Dispersion

Poirrier, Julien 23 August 2000 (has links)
Polarization mode dispersion (PMD) induces polarization-dependent propagation and consequently generates multiple imaging of the light pulse carrying the information. To first order it appears as a dual-path fading channel with Maxwellian statistics. The resulting impairments prevent the upgrade and installation of high-bit-rate systems. Because PMD is a random process with a strong frequency dependence, its mitigation must be channel-by-channel, non-linear, and adaptive. Electronic mitigation appears to be a very attractive way to overcome the limit set by PMD, so we considered implementing these solutions at the receiver in the electrical domain. We verified that linear and non-linear equalization techniques can greatly reduce the power penalty due to PMD. Equalization performance depends strongly on the type of system considered. For the two main types, thermal-noise-limited systems and systems exhibiting ASE (systems using optical amplifiers), we demonstrated and quantified the induced improvement, measured as power-penalty reduction. The most sophisticated technique we considered (NLC+FDE) handles any first-order PMD within a 4 dB margin in the thermal noise limit; this extends to an 11 dB margin in the presence of ASE, which follows from the limitation set by the signal dependence of the noise. In fact, these DSP techniques do a better job of reducing very high penalties. Consequently, for a power- and ISI-limited link, electronic solutions may need to be combined with optical compensation to reach acceptable performance. On the other hand, for links with a large power margin or reasonable PMD, electronic techniques are an easy, inexpensive, and convenient solution. We derived in this work the bounds on NLC performance in the presence of ASE, thereby extending the usual results of the thermal noise limit to the particular case of signal-dependent noise.
We also made clear that optical systems, because of their noise specificities, cannot be studied or designed like other links. Notions such as eye opening, SNR, and ISI need to be carefully defined and adapted to this case. We have provided in this work PMD-dependent power-penalty maps for known systems: given a link's statistics and characteristics, one can determine, following our structure, which mitigation techniques allow an upgrade. / Master of Science
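The "Maxwellian statistics" of first-order PMD mentioned above refer to the differential group delay (DGD) behaving like the magnitude of a 3-component Gaussian vector, whose mean is 2·sigma·sqrt(2/pi). The following numerical sketch checks that relation by sampling; the sigma value and function name are illustrative, not from the thesis.

```python
# Numerical sketch of Maxwellian DGD statistics for first-order PMD.
import random
from math import sqrt, pi

def sample_dgd(sigma: float, n: int, seed: int = 1) -> list[float]:
    """Draw n Maxwellian samples as |(g1, g2, g3)| with gi ~ N(0, sigma)."""
    rng = random.Random(seed)
    return [sqrt(sum(rng.gauss(0.0, sigma) ** 2 for _ in range(3)))
            for _ in range(n)]

sigma = 1.0
samples = sample_dgd(sigma, 50_000)
empirical_mean = sum(samples) / len(samples)
theoretical_mean = 2 * sigma * sqrt(2 / pi)   # about 1.596 for sigma = 1
print(empirical_mean, theoretical_mean)
```

Because the Maxwellian has a long tail, instantaneous DGD occasionally far exceeds its mean, which is why the abstract stresses adaptive, channel-by-channel mitigation rather than a fixed correction.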


Eng, Thomas 10 1900 (has links)
International Telemetering Conference Proceedings / October 27-30, 1997 / Riviera Hotel and Convention Center, Las Vegas, Nevada / The relative anti-jam (AJ) performance of several diversity combiners is investigated. The modulation is 8-ary frequency-shift keying (FSK); the demodulation process consists of energy detection of the eight frequency bins at each hop and the subsequent combining of detector outputs. Three combiners are considered: the linear combiner, where the detector outputs of each hop (corresponding to the same frequency bin) are summed without any processing; the self-normalized combiner, where the eight detector outputs of any particular hop are normalized so that they add to unity; and the max-normalized combiner, where the eight detector outputs of any hop are divided by the maximum value among those eight outputs. Results indicate that under worst-case tone jamming, the self-normalized combiner performs best, the max-normalized combiner second best, and the linear combiner worst among the three.
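The three combining rules described in this abstract are simple enough to sketch directly. The detector values below are illustrative, but each function implements exactly the rule named in the text: raw sums, per-hop sum-to-unity normalization, and per-hop division by the hop maximum.

```python
# Minimal sketch of the three hop combiners for 8-ary FSK energy detection.
def linear_combine(hops):
    """Sum detector outputs per frequency bin with no per-hop processing."""
    return [sum(hop[k] for hop in hops) for k in range(8)]

def self_normalized_combine(hops):
    """Normalize each hop's 8 outputs to sum to unity, then sum per bin."""
    return [sum(hop[k] / sum(hop) for hop in hops) for k in range(8)]

def max_normalized_combine(hops):
    """Divide each hop's 8 outputs by that hop's maximum, then sum per bin."""
    return [sum(hop[k] / max(hop) for hop in hops) for k in range(8)]

# Two illustrative hops: one hit by a strong tone jammer, one clean.
hops = [[100, 1, 1, 1, 1, 1, 1, 1],   # tone jammer in bin 0
        [1, 5, 1, 1, 1, 1, 1, 1]]     # signal energy in bin 1
# The linear combiner lets the jammed hop dominate; the normalized
# combiners bound any single hop's contribution.
print(linear_combine(hops)[0], self_normalized_combine(hops)[1])
```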

Exploring Radio Frequency Positioning Methods and Their Use in Determining User Context in Public Spaces

Guercio, Remy 01 January 2016 (has links)
RF positioning methods have various tradeoffs that make them suitable for differing applications. This thesis identifies the most prominent positioning methods and determines their suitability for context-aware applications in public spaces using a number of different factors. It first explores the physical characteristics of GPS, GSM, 802.11, and Bluetooth, focusing on coverage and accuracy in both a historical and a forward-looking context. Next, it explores what it means for an application to be context-aware and how that translates into building applications used in the context of public places. The thesis then reflects on the intersection of the two and explores some challenges related to practical implementations. To further explore these challenges, it assesses a high-accuracy use case: merging Bluetooth positioning with augmented reality and virtual reality applications. We find that in the last decade Bluetooth has made rapid advancements relative to competing technologies, but it is still far from ideal in all situations, especially when the situation requires extremely high accuracy.
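One ranging step common to the 802.11 and Bluetooth positioning methods surveyed here is estimating distance from received signal strength via the log-distance path-loss model. This is a hedged sketch: the 1 m reference power (-59 dBm) and path-loss exponent (2, free space) are assumed example values, not figures from the thesis.

```python
# Illustrative RSSI-to-distance conversion via the log-distance model.
def rssi_to_distance(rssi_dbm: float, tx_power_dbm: float = -59.0,
                     path_loss_exp: float = 2.0) -> float:
    """Estimate distance in meters as d = 10^((P0 - RSSI) / (10 * n)),
    where P0 is the measured power at 1 m and n is the path-loss exponent."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

print(rssi_to_distance(-59.0))  # RSSI equal to the 1 m reference -> 1.0 m
print(rssi_to_distance(-79.0))  # 20 dB weaker with n = 2 -> about 10 m
```

In real public spaces the exponent varies with obstructions and multipath, which is a large part of why the thesis finds RSSI-based methods insufficient when extremely high accuracy is required.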

Concatenation of punctured convolutional codes.

Bienz, Richard Alan. January 1992 (has links)
The cascading or concatenation of error control codes is a well-established technique in digital communications. This type of code can yield excellent bit error rate performance. Concatenated codes that contain short-memory convolutional codes are applicable to many communication links; the applications include the various combinations of modulations with memory, channels with memory, and coding with memory. The Viterbi decoder is the decoder of choice for these concatenated coding schemes. Unfortunately, Viterbi decoders produce only hard decisions, so the Viterbi decoders near the channel (inner decoders) do not send all the available symbol information (soft decisions) to the outer decoders, and there are no practical decoders that produce this symbol information. The result is an unrealized coding gain. The principal contribution of this dissertation is a new decoder design that can be used as an inner decoder in a concatenated convolutional coding scheme. This decoder is a modified Viterbi decoder that generates soft decisions; it has been named the Maximum Likelihood Paths Comparison (MLPC) decoder. The MLPC decoder uses a subset of the operations performed by a normal Viterbi decoder and is therefore practical. The performance of the new decoder in a communication link is determined by simulation. The link uses a concatenated code that contains two convolutional codes; both codes have a base code constraint length of 7 and rates of 1/2, and the outer code is punctured to a few higher rates. Various results from these simulations are presented. The bit error rate performance of the code is excellent, and it matches the theoretical upper bit error rate bound very closely for the signal-to-noise ratios simulated. The complexity of the overall concatenated code system is compared to that of a single convolutional code with equal performance. Under certain reasonable assumptions, the complexity of the concatenated code is roughly an order of magnitude less than that of the single code.
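Puncturing, as applied to the outer code described above, just deletes coded bits according to a repeating pattern to raise the code rate. The sketch below uses the common rate-2/3 pattern (keep 3 of every 4 rate-1/2 coded bits); this pattern is an assumption for illustration, not necessarily the dissertation's exact puncturing scheme.

```python
# Illustrative puncturing of a rate-1/2 convolutional encoder's output.
def puncture(coded_bits, pattern=(1, 1, 1, 0)):
    """Keep the coded bits where the repeating pattern has a 1.

    With a rate-1/2 base code, 2 info bits yield 4 coded bits; keeping
    3 of those 4 gives an effective rate of 2/3.
    """
    return [b for i, b in enumerate(coded_bits) if pattern[i % len(pattern)]]

# 4 information bits -> 8 rate-1/2 coded bits -> 6 transmitted bits (rate 2/3).
coded = [1, 0, 1, 1, 0, 0, 1, 0]
print(puncture(coded))  # every 4th coded bit is deleted before transmission
```

At the receiver, the decoder reinserts erasures (zero-confidence soft values) at the punctured positions so the same rate-1/2 trellis can be used for decoding.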


Ramaswamy, Prem. January 1983 (has links)
No description available.
