About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
431

Multihop Concept in Cellular Systems

Rangineni, Kiran January 2008
The ever-growing demand for communication services that meet our needs in a sophisticated way motivated the choice of this master's thesis topic, "Multihop Concept in Cellular Systems". This thesis introduces an approach to integrating relaying, or multihop, schemes into the next generation of cellular networks. In a multihop cellular architecture, users send their data to the base station either via a relay station or through direct communication with the base station. These relay stations can be nomadic, fixed at a specific location, or users' mobile stations (i.e. mobile relay stations). The main objective of this work is to compare relaying network architectures with different channel bandwidths, as well as their performance gains. For this we integrate the relay station into a conventional cellular network using IEEE 802.16j (one of the standards introducing the relay station concept in WiMAX) with OFDMA (Orthogonal Frequency Division Multiple Access, a transmission technique based on many orthogonal subchannels, i.e. sets of carriers, transmitting simultaneously). The results show that, under certain conditions, the throughput and coverage of the system increase when relay stations are introduced into the cellular base station's zone.
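To make the relay-versus-direct trade-off concrete, here is a minimal Python sketch, not taken from the thesis: it compares a Shannon-capacity estimate of the direct link against the bottleneck hop of a two-hop relay path (halved for a half-duplex relay) and picks the better option. All names and parameter values are invented for the example.

```python
import math

def link_rate(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-capacity estimate of a single link, in bit/s."""
    return bandwidth_hz * math.log2(1.0 + snr_linear)

def choose_path(bw: float, snr_direct: float,
                snr_ms_rs: float, snr_rs_bs: float) -> str:
    """Pick direct transmission or the two-hop relay path.

    A two-hop path is limited by its weaker hop; with a half-duplex
    relay station each hop also gets only half of the channel time.
    """
    direct = link_rate(bw, snr_direct)
    relay = 0.5 * min(link_rate(bw, snr_ms_rs), link_rate(bw, snr_rs_bs))
    return "relay" if relay > direct else "direct"

# Hypothetical cell-edge user: poor direct SNR, good hops via the relay.
print(choose_path(bw=10e6, snr_direct=0.5, snr_ms_rs=20.0, snr_rs_bs=50.0))
```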
432

Frekvensstörningar i IEEE 802.11b nätverk / Frequency Interference in IEEE 802.11b Networks

Envik, Richard, Kullberg, Niclas, Johansson, Martin January 2003
No description available.
433

Architectures and Algorithms for Future Wireless Local Area Networks

Dely, Peter January 2012
Future Wireless Local Area Networks (WLANs) with high carrier frequencies and wide channels need a dense deployment of Access Points (APs) to provide good performance. In densely deployed WLANs, associations of stations and handovers need to be managed more intelligently than today. This dissertation studies, from both a theoretical and a practical perspective, when and how a station should perform a handover, and to which AP. We formulate and solve optimization problems that compute the optimal AP for each station in conventional WLANs and in WLANs connected via a wireless mesh backhaul. Moreover, we propose to use software-defined networking and the OpenFlow protocol to optimize station associations, handovers and traffic rates. Furthermore, we develop new mechanisms to estimate the quality of a link between a station and an AP. Those mechanisms allow optimization algorithms to make better decisions about when to initiate a handover. Since handovers in today's WLANs are slow and may disturb real-time applications such as video streaming, a faster procedure is developed in this thesis. Evaluation results from wireless testbeds and network simulations show that our architectures and algorithms significantly increase the performance of WLANs while remaining backward compatible.
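As an illustration of the association-optimization idea, the sketch below implements a simple greedy heuristic, not the thesis's actual optimization formulation (which the abstract does not give): each station is assigned to the AP that maximizes its expected share of airtime-fair throughput given the load already placed on that AP. The topology and rates are invented.

```python
def greedy_association(rates: dict[str, dict[str, float]]) -> dict[str, str]:
    """Assign each station to an AP, greedily maximizing its expected share.

    rates[sta][ap] is the PHY rate (Mbit/s) station sta would get on ap.
    Under airtime fairness, n stations on one AP each get roughly rate / n.
    """
    load: dict[str, int] = {}           # stations currently on each AP
    assignment: dict[str, str] = {}
    # Serve stations in order of their best achievable rate.
    for sta in sorted(rates, key=lambda s: -max(rates[s].values())):
        best_ap = max(rates[sta],
                      key=lambda ap: rates[sta][ap] / (load.get(ap, 0) + 1))
        assignment[sta] = best_ap
        load[best_ap] = load.get(best_ap, 0) + 1
    return assignment

# Hypothetical dense deployment: two APs, three stations.
rates = {"sta1": {"ap1": 54.0, "ap2": 6.0},
         "sta2": {"ap1": 48.0, "ap2": 36.0},
         "sta3": {"ap1": 24.0, "ap2": 24.0}}
print(greedy_association(rates))
```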
434

Design of Multi-Code Rate LDPC Decoder for IEEE 802.16e Standard

Hsiao, Chih-hao 28 August 2007 (has links)
This thesis presents a novel VLSI design of a multi-code-rate Low-Density Parity-Check (LDPC) decoder for the IEEE 802.16e standard. In order to support the different code rates adopted by the standard, this thesis proposes a programmable LDPC decoder architecture based on an edge-serial approach. This edge-serial architecture performs sequential check-node computation according to internal sequence-update commands. Any complex, irregular parity-check matrix can be realized in the proposed architecture, provided the number of bit nodes each check node connects to does not exceed a certain bound. In addition to this high flexibility, the thesis proposes several design optimization techniques suitable for the LDPC decoder. First, whereas past LDPC decoder designs put most of the emphasis on realizing the check-node function, this thesis applies a novel bit-node-major approach that leads to a more compact design. Second, a fine-grained message-update method allows more rapid message passing, so the decoder converges in fewer cycles; in addition, almost half of the message memory can be eliminated. Furthermore, based on the bit-node-major decoder design, an early-termination scheme can partially shut down some bit nodes to reduce the decoding cycles. Other salient features include rescheduling the message-update order to allow overlapping of different decoding iterations, which reduces the effect of possible message-update hazards due to the long internal pipeline latency. With the proposed optimization methods, our experimental results show that the hardware cost can be reduced by 23.1% while the decoding cycles can be reduced by 27.4%. The proposed LDPC decoder architecture has been realized in 0.18 µm technology with a total gate count of 316k. Our experiments show that the proposed LDPC decoder can run at up to 235 MHz and deliver an average throughput of 116 Mbps.
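For readers unfamiliar with LDPC decoding, the toy sketch below shows its simplest relative, a hard-decision bit-flipping decoder. It is purely conceptual: the decoder designed in this thesis passes soft messages through the edge-serial, bit-node-major architecture described above, which this sketch does not attempt to reproduce.

```python
import numpy as np

def bit_flip_decode(H: np.ndarray, r: np.ndarray, max_iters: int = 50) -> np.ndarray:
    """Hard-decision bit-flipping decoding of a binary LDPC-style code.

    H is an (m, n) parity-check matrix over GF(2); r holds received hard bits.
    Each iteration flips the bits involved in the most unsatisfied checks.
    """
    x = r.copy()
    for _ in range(max_iters):
        syndrome = (H @ x) % 2            # 1 marks an unsatisfied check
        if not syndrome.any():
            break                         # valid codeword reached
        votes = H.T @ syndrome            # unsatisfied checks touching each bit
        x[votes == votes.max()] ^= 1      # flip the worst offenders
    return x

# Toy (7,4) Hamming code used as a stand-in parity-check matrix.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
received = np.zeros(7, dtype=int)         # the all-zero word is a codeword
received[2] ^= 1                          # inject a single bit error
print(bit_flip_decode(H, received))       # recovers the all-zero codeword
```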
435

From Interoperability to Harmonization in Metadata Standardization : Designing an Evolvable Framework for Metadata Harmonization

Nilsson, Mikael January 2010
Metadata is an increasingly central tool in the current web environment, enabling large-scale, distributed management of resources. Recent years have seen growing interaction between previously relatively isolated metadata communities, driven by a need for cross-domain collaboration and exchange. However, metadata standards have not been able to meet the needs of interoperability between independent standardization communities. For this reason the notion of metadata harmonization, defined as interoperability of combinations of metadata specifications, has risen as a core issue for the future of web-based metadata. This thesis presents a solution-oriented analysis of current issues in metadata harmonization. A set of widely used metadata specifications in the domains of learning technology, libraries and the general web environment have been chosen as targets for the analysis, with a special focus on Dublin Core, IEEE LOM and RDF. Through active participation in several metadata standardization communities, a body of knowledge of harmonization issues has been developed. The thesis presents an analytical framework of concepts and principles for understanding the issues arising when interfacing multiple standardization communities. The analytical framework focuses on a set of important patterns in metadata specifications and their respective contributions to harmonization issues:

• Metadata syntaxes as a tool for metadata exchange. Syntaxes are shown to be of secondary importance in harmonization.
• Metadata semantics as a cornerstone for interoperability. The thesis argues that incongruences in the interpretation of metadata descriptions play a significant role in harmonization.
• Abstract models for metadata as a tool for designing metadata standards. Such models are shown to be pivotal in understanding harmonization problems.
• Vocabularies as carriers of meaning in metadata. The thesis shows how portable vocabularies can carry semantics from one standard to another, enabling harmonization.
• Application profiles as a method for combining metadata standards. While application profiles have been put forward as a powerful tool for interoperability, the thesis concludes that they have only a marginal role to play in harmonization.

The analytical framework is used to analyze and compare seven metadata specifications, and a concrete set of harmonization issues is presented. These issues are used as a basis for a metadata harmonization framework in which a multitude of metadata specifications with different characteristics can coexist. The thesis concludes that the Resource Description Framework (RDF) is the only existing specification with the right characteristics to serve as a practical basis for such a harmonization framework, and that it therefore must be taken into account when designing metadata specifications. Based on the harmonization framework, a best practice for metadata standardization development is formulated, and a roadmap for harmonization improvements of the analyzed standards is presented.
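The conclusion that RDF can serve as the common basis under which vocabularies stay portable is easy to demonstrate in code. The sketch below is our illustration, not the author's: it uses the third-party rdflib package (an assumption, not mentioned in the abstract) to state a Dublin Core description of this very record as an RDF graph and serialize it as Turtle.

```python
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DC, DCTERMS

g = Graph()
thesis = URIRef("http://example.org/thesis/435")   # hypothetical identifier

# Dublin Core terms carry their meaning with them: any RDF-aware consumer
# can interpret these statements without knowing our local conventions.
g.add((thesis, DC.title, Literal("From Interoperability to Harmonization "
                                 "in Metadata Standardization")))
g.add((thesis, DC.creator, Literal("Nilsson, Mikael")))
g.add((thesis, DCTERMS.issued, Literal("2010")))

print(g.serialize(format="turtle"))
```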
436

Design of Efficient MAC Protocols for IEEE 802.15.4-based Wireless Sensor Networks

Khanafer, Mounib 01 May 2012 (has links)
Wireless Sensor Networks (WSNs) have attracted strong attention in the research community due to the broad range of applications and services they support. WSNs are composed of intelligent sensor nodes that can monitor different types of environmental phenomena or critical activities. Sensor nodes operate under stringent constraints: scarce power resources, limited storage capacity, limited processing capability, and hostile environmental surroundings. Conserving the sensor nodes' power resources is the top-priority requirement in the design of a WSN, as it has a direct impact on the network's lifetime. The IEEE 802.15.4 standard defines a set of specifications for both the PHY layer and the MAC sub-layer that abide by the distinctive requirements of WSNs. The standard's MAC protocol employs a backoff algorithm, called Binary Exponential Backoff (BEB), that minimizes the drainage of power in these networks. In this thesis we present an in-depth study of the IEEE 802.15.4 MAC protocol to highlight both its strong and weak aspects, and we show that there are enticing opportunities to improve the performance of this protocol in the context of WSNs. We propose three new backoff algorithms to replace the standard BEB: Standby-BEB (SB-BEB), the Adaptive Backoff Algorithm (ABA), and Priority-Based BEB (PB-BEB). The main contribution of the thesis is a new design concept for efficient backoff algorithms in IEEE 802.15.4-based WSNs: controlling the algorithm's parameters probabilistically has a direct impact on the backoff algorithm's performance. We provide detailed discrete-time Markov models (for SB-BEB and ABA) and extensive simulation studies (for all three algorithms) to demonstrate the superiority of the new algorithms over the standard BEB.
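For context, the standard algorithm that the three proposals replace works roughly as sketched below. This is a deliberately simplified model of slotted CSMA-CA in IEEE 802.15.4: it keeps the default macMinBE/macMaxBE/macMaxCSMABackoffs logic but abstracts the channel as a fixed busy probability and performs one clear-channel assessment per attempt instead of the standard's two.

```python
import random

MAC_MIN_BE = 3            # default macMinBE in IEEE 802.15.4
MAC_MAX_BE = 5            # default macMaxBE
MAX_CSMA_BACKOFFS = 4     # default macMaxCSMABackoffs

def beb_attempt(p_busy: float) -> tuple[bool, int]:
    """One CSMA-CA transmission attempt under the standard BEB.

    Returns (success, backoff slots waited); the channel is modelled as
    busy with a fixed probability p_busy at each clear-channel check.
    """
    be = MAC_MIN_BE
    slots_waited = 0
    for _ in range(MAX_CSMA_BACKOFFS + 1):
        slots_waited += random.randint(0, 2**be - 1)  # random backoff delay
        if random.random() >= p_busy:                 # channel found idle
            return True, slots_waited
        be = min(be + 1, MAC_MAX_BE)                  # double the window
    return False, slots_waited                        # channel access failure

# Rough success rate under moderate contention (hypothetical p_busy).
trials = [beb_attempt(p_busy=0.6)[0] for _ in range(10_000)]
print(f"success rate: {sum(trials) / len(trials):.2%}")
```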
437

Överföring av digital video via FireWire / Transmission of Digital Video through FireWire

Andersson, Peter January 2002
Transmission of digital signals is today more common than transmission of analog signals. One reason is that a digital signal is less sensitive to noise than an analog one; another is that almost all signals today are handled in a digital format. This thesis describes the development of a system that receives digital video signals through FireWire. The standard for FireWire, a high-performance serial bus, is still under development. Today the bus standard supports data transmission at speeds of up to 400 Mbit/s; in the future, FireWire is expected to transmit data at up to 3.2 Gbit/s. The thesis gives an introduction to the FireWire technology and how it is implemented. It also includes a short description of digital video signals in the DVCAM format.
438

A Characterization of Wireless Network Interface Card Active Scanning Algorithms

Gupta, Vaibhav 04 December 2006 (has links)
In this thesis, we characterize the proprietary active scanning algorithms of several wireless network interface cards. Our experiments are the first of their kind to observe the complete scanning process as the wireless network interface cards probe all the channels in the 2.4 GHz spectrum. We discuss: 1) the correlation between channel popularity during active scanning and access-point channel-deployment popularity; 2) statistics on the number of probe request frames sent on each channel; 3) the channel probe order; and 4) the dwell time. The knowledge gained from characterizing wireless network interface cards is important for the following reasons: 1) it helps one understand how active scanning is implemented in different hardware and software; 2) it can be useful in identifying a wireless rogue host; 3) it can help implement active scanning in network simulators; and 4) it can radically influence research in related fields such as link-layer handovers and effective deployment of access points.
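As a rough illustration of the quantities being characterized, the sketch below mimics one active-scanning pass: the card probes each 2.4 GHz channel a few times and dwells to collect probe responses. Every number here (channel order, probes per channel, dwell time) is hypothetical; recovering the vendor-specific values of exactly these parameters is what the thesis's measurements are about.

```python
import random

# Hypothetical vendor parameters -- the quantities the thesis measures.
CHANNEL_ORDER = [1, 6, 11, 2, 3, 4, 5, 7, 8, 9, 10]  # popular channels first
PROBES_PER_CHANNEL = 2
DWELL_TIME_MS = 30.0

def active_scan(aps_on_channel: dict[int, list[str]]) -> tuple[list[str], float]:
    """One active-scanning pass over all channels.

    Returns the discovered APs and the total scan latency in milliseconds.
    """
    found: list[str] = []
    latency_ms = 0.0
    for channel in CHANNEL_ORDER:
        latency_ms += PROBES_PER_CHANNEL * 1.0   # ~1 ms to send each probe
        latency_ms += DWELL_TIME_MS              # wait for probe responses
        for ap in aps_on_channel.get(channel, []):
            if random.random() < 0.9:            # responses occasionally lost
                found.append(ap)
    return found, latency_ms

# Toy deployment mirroring real-world channel popularity (1, 6 and 11).
deployment = {1: ["ap-lobby"], 6: ["ap-lab", "ap-office"], 11: ["ap-cafe"]}
print(active_scan(deployment))
```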
439

An equalization technique for high rate OFDM systems

Yuan, Naihua 05 December 2003
In a typical orthogonal frequency division multiplexing (OFDM) broadband wireless communication system, a guard interval using a cyclic prefix is inserted to avoid inter-symbol interference and inter-carrier interference. This guard interval must be at least as long as the maximum channel delay spread. The method is very simple, but it reduces the transmission efficiency, and the efficiency becomes very low in systems that exhibit a long channel delay spread with a small number of sub-carriers, such as the IEEE 802.11a wireless LAN (WLAN). To increase the transmission efficiency, a time domain equalizer (TEQ) is commonly included in an OFDM system to shorten the effective channel impulse response to within the guard interval. Many TEQ algorithms have been developed for low-rate OFDM applications such as the asymmetric digital subscriber line (ADSL); their drawback is a high computational load, and most of the popular TEQ algorithms are not suitable for the IEEE 802.11a system, a high-data-rate wireless LAN based on the OFDM technique. In this thesis, a TEQ algorithm based on the minimum mean square error criterion is investigated for the high-rate IEEE 802.11a system. The algorithm has a comparatively low computational complexity, making it practical for high-data-rate OFDM systems. In forming the model used to design the TEQ, a reduced convolution matrix is exploited to lower the computational complexity. Mathematical analysis and simulation results are provided to show the validity and the advantages of the algorithm. In particular, it is shown that a high performance gain at a data rate of 54 Mbps can be obtained with a moderate-order TEQ finite impulse response (FIR) filter. The algorithm is implemented in a field programmable gate array (FPGA). The characteristics and regularities of the matrix elements are further exploited to reduce the hardware complexity of the matrix multiplication implementation. The optimum TEQ coefficients can be found in less than 4 µs for a 7th-order TEQ FIR filter; this is the duration of one OFDM symbol in the IEEE 802.11a system. To compensate for the effective channel impulse response, a 64-point radix-4 pipelined fast Fourier transform block is implemented in the FPGA to perform zero-forcing equalization in the frequency domain. The offsets between the hardware implementation and the mathematical calculations are provided and analyzed, and the system performance loss introduced by the hardware implementation is tested. Hardware output and simulation results verify that the chips function properly and satisfy the requirements of a system running at a data rate of 54 Mbps.
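The efficiency cost of the cyclic prefix that motivates this work is easy to quantify, since the IEEE 802.11a numerology is public: a 64-point FFT at 20 MHz gives a 3.2 µs useful symbol, and the 16-sample (0.8 µs) guard interval stretches it to the 4 µs symbol mentioned above. The short calculation below, our own illustration rather than anything from the thesis, checks both the 20% guard overhead and the 54 Mbps peak rate.

```python
# IEEE 802.11a OFDM numerology.
SAMPLE_RATE = 20e6          # Hz
N_FFT = 64                  # samples per useful symbol (one per subcarrier)
N_CP = 16                   # cyclic-prefix samples (0.8 us guard interval)
N_DATA_SUBCARRIERS = 48     # the other 16 subcarriers are pilots or unused
BITS_PER_SUBCARRIER = 6     # 64-QAM
CODE_RATE = 3 / 4           # highest-rate convolutional code

symbol_time = (N_FFT + N_CP) / SAMPLE_RATE             # 4.0 us per symbol
efficiency = N_FFT / (N_FFT + N_CP)                    # share not spent on CP
peak_rate = N_DATA_SUBCARRIERS * BITS_PER_SUBCARRIER * CODE_RATE / symbol_time

print(f"symbol time  : {symbol_time * 1e6:.1f} us")    # 4.0 us
print(f"CP efficiency: {efficiency:.0%}")              # 80% (20% overhead)
print(f"peak rate    : {peak_rate / 1e6:.0f} Mbps")    # 54 Mbps
```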
