111

Blind Acquisition of Short Burst with Per-Survivor Processing (PSP)

Mohammad, Maruf H. 13 December 2002 (has links)
This thesis investigates the use of Maximum Likelihood Sequence Estimation (MLSE) in the presence of unknown channel parameters. MLSE is a fundamental problem that is closely related to many modern research areas such as Space-Time Coding, Overloaded Array Processing, and Multi-User Detection. Per-Survivor Processing (PSP) is a technique for approximating MLSE for unknown channels by embedding channel estimation into the structure of the Viterbi Algorithm (VA). In the case of successful acquisition, the convergence rate of PSP is comparable to that of the pilot-aided RLS algorithm. However, the performance of PSP degrades when certain sequences are transmitted. In this thesis, the blind acquisition characteristics of PSP are discussed, and the sequences that are problematic for any joint ML data and channel estimator are characterized from an analytic perspective. Based on the theory of indistinguishable sequences, modifications to conventional PSP are suggested that improve its acquisition performance significantly. The effect of tree-search and list-based algorithms on PSP is also discussed, and the proposed improvement techniques are compared for different channels. For higher-order channels, complexity issues dominate the choice of algorithms, so PSP with state-reduction techniques is considered. Typical misacquisition conditions, transients, and initialization issues are reported. / Master of Science
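To make the core idea concrete, the following is a minimal sketch (Python/NumPy) of per-survivor processing for a BPSK burst over a two-tap ISI channel: each Viterbi survivor carries its own LMS channel estimate, updated along its own path. The step size, initial estimate, and the single known reference symbol used to anchor the trellis are illustrative assumptions, and the sketch does not include the acquisition-improving modifications proposed in the thesis, so misacquisition of the kind the thesis analyzes can still occur.

```python
import numpy as np

rng = np.random.default_rng(4)

# --- BPSK burst through an unknown 2-tap ISI channel ------------------------
n_sym = 400
h_true = np.array([1.0, 0.6])                  # unknown to the receiver
s = 2.0 * rng.integers(0, 2, n_sym) - 1.0
s_padded = np.concatenate(([1.0], s))          # assumed known +1 reference symbol first
y = h_true[0] * s_padded[1:] + h_true[1] * s_padded[:-1]
y += 0.1 * rng.standard_normal(n_sym)

# --- Per-survivor processing over a 2-state trellis --------------------------
symbols = np.array([+1.0, -1.0])               # state = previous symbol
mu = 0.02                                      # LMS step size (assumed)
metric = np.array([0.0, np.inf])               # start in the state matching the reference symbol
h_est = [np.array([0.5, 0.0]), np.array([0.5, 0.0])]   # per-survivor channel estimates
paths = [[], []]

for n in range(n_sym):
    new_metric = np.full(2, np.inf)
    new_h, new_paths = [None, None], [None, None]
    for q, s_cur in enumerate(symbols):              # candidate current symbol (new state)
        for p, s_prev in enumerate(symbols):         # candidate source state
            pred = h_est[p][0] * s_cur + h_est[p][1] * s_prev
            err = y[n] - pred
            m = metric[p] + err ** 2                 # branch metric uses the survivor's own estimate
            if m < new_metric[q]:
                new_metric[q] = m
                # per-survivor LMS channel update along the winning path
                new_h[q] = h_est[p] + mu * err * np.array([s_cur, s_prev])
                new_paths[q] = paths[p] + [s_cur]
    metric, h_est, paths = new_metric, new_h, new_paths

best = int(np.argmin(metric))
decisions = np.array(paths[best])
print(f"symbol errors: {int(np.sum(decisions != s))}, "
      f"final channel estimate: {np.round(h_est[best], 3)}")
```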
112

Self-interference Handling in OFDM Based Wireless Communication Systems

Yücek, Tevfik 14 November 2003 (has links)
Orthogonal Frequency Division Multiplexing (OFDM) is a multi-carrier modulation scheme that provides efficient bandwidth utilization and robustness against time-dispersive channels. This thesis deals with self-interference, i.e., the corruption of the desired signal by itself, in OFDM systems. Inter-symbol Interference (ISI) and Inter-carrier Interference (ICI) are two types of self-interference in OFDM systems. The cyclic prefix is one method to prevent ISI, which is the interference of the echoes of a transmitted signal with the original transmitted signal. The length of cyclic prefix required to remove ISI depends on the channel conditions, and it is usually chosen according to the worst-case channel scenario. Methods to find the parameters required to adapt the length of the cyclic prefix to the instantaneous channel conditions are investigated. The frequency selectivity of the channel is extracted from the instantaneous channel frequency estimates, and methods to estimate related parameters, e.g. coherence bandwidth and root-mean-squared (RMS) delay spread, are given. These parameters can also be used to better utilize the available resources in wireless systems through transmitter and receiver adaptation. Another common form of self-interference in OFDM systems is ICI, the power leakage among different sub-carriers, which degrades the performance of both symbol detection and channel estimation. Two new methods are proposed to reduce the effect of ICI in symbol detection and in channel estimation. The first method uses the colored nature of ICI to cancel it in order to decrease the error rate in the detection of transmitted symbols, and the second method reduces the effect of ICI in channel estimation by jointly estimating the channel and the frequency offset, a major source of ICI.
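As a small illustration of the delay-dispersion parameters mentioned above, the sketch below (Python/NumPy) computes the RMS delay spread of a discrete power delay profile and a rule-of-thumb coherence bandwidth; the example profile and the 1/(5·τ_rms) approximation are illustrative assumptions, not values or methods taken from the thesis.

```python
import numpy as np

def rms_delay_spread(taus, powers):
    """Mean excess delay and RMS delay spread from a discrete power delay profile."""
    p = powers / np.sum(powers)          # normalize tap powers
    mean_delay = np.sum(p * taus)        # first moment of the delay
    second_moment = np.sum(p * taus**2)  # second moment of the delay
    return np.sqrt(second_moment - mean_delay**2)

# Example: a 4-tap exponential-like profile (delays in seconds, powers linear)
taus = np.array([0.0, 0.5e-6, 1.0e-6, 2.0e-6])
powers = np.array([1.0, 0.5, 0.25, 0.1])

tau_rms = rms_delay_spread(taus, powers)
coherence_bw = 1.0 / (5.0 * tau_rms)     # common rule of thumb for ~0.5 frequency correlation
print(f"RMS delay spread: {tau_rms*1e6:.3f} us, coherence bandwidth: {coherence_bw/1e3:.1f} kHz")
```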
113

Online Machine Learning for Wireless Communications: Channel Estimation, Receive Processing, and Resource Allocation

Li, Lianjun 03 July 2023 (has links)
Machine learning (ML) has shown its success in many areas such as computer vision, natural language processing, robot control, and gaming. ML also draws significant attention in the wireless communication society. However, applying ML schemes to wireless communication networks is not straightforward; several challenges need to be addressed: 1) training data in communication networks, especially in the physical and MAC layers, are extremely limited; 2) the highly dynamic wireless environment and fast-changing transmission schemes in communication networks make offline training impractical; 3) ML tools are treated as black boxes, which lack explainability. This dissertation addresses those challenges by selecting training-efficient neural networks, devising online training frameworks for wireless communication scenarios, and incorporating communication domain knowledge into the algorithm design. Training-efficient ML algorithms are customized for three communication applications: 1) symbol detection, where real-time online learning-based symbol detection algorithms are designed for MIMO-OFDM and massive MIMO-OFDM systems by utilizing reservoir computing, extreme learning machines, multi-mode reservoir computing, and StructNet; 2) channel estimation, where a residual learning-based offline method is introduced for WiFi-OFDM systems and a StructNet-based online method is devised for MIMO-OFDM systems; 3) radio resource management, where reinforcement learning-based schemes are designed for dynamic spectrum access as well as O-RAN intelligent network slicing management. All algorithms introduced in this dissertation have demonstrated outstanding performance in their application scenarios, which paves the way for adopting ML-based solutions in practical wireless networks. / Doctor of Philosophy / Machine learning (ML), a branch of computer science that trains machines to learn a solution from data, has shown its success in many areas such as computer vision, natural language processing, robot control, and gaming. ML also draws significant attention in the wireless communication society. However, applying ML schemes to wireless communication networks is not straightforward; several challenges need to be addressed: 1) Training: unlike areas such as computer vision, where large amounts of training data are available, training data in communication systems are limited; 2) Uncertainty in generalization: ML usually requires offline training, where the ML models are trained on artificially generated offline data under the assumption that the offline training data have the same statistical properties as the online testing data; when they are statistically different, the testing performance cannot be guaranteed; 3) Lack of explainability: ML tools are usually treated as black boxes whose behavior can hardly be explained in an analytical way, and when designed for wireless networks it is desirable for ML to have levels of explainability similar to conventional methods. This dissertation addresses those challenges by selecting training-efficient neural networks, devising online training frameworks for wireless communication scenarios, and incorporating communication domain knowledge into the algorithm design. Training-efficient ML algorithms are customized for three communication applications: 1) symbol detection, a critical step of wireless communication receiver processing that aims to recover the transmitted signals from the corruption of undesired wireless channel effects and hardware impairments; 2) channel estimation, where the transmitter sends special symbols called pilots, whose values and positions are known to the receiver, and the receiver estimates the underlying wireless channel by comparing the received symbols with the known pilot information; 3) radio resource management, which allocates wireless resources such as bandwidth and time slots to different users. All algorithms introduced in this dissertation have demonstrated outstanding performance in their application scenarios, which paves the way for adopting ML-based solutions in practical wireless networks.
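For the symbol-detection application, a minimal sketch of a training-efficient receiver in the same spirit is shown below: an extreme-learning-machine-style equalizer (fixed random hidden layer, least-squares output weights) trained on a short pilot block and applied to data over a BPSK ISI channel. The channel taps, window length, hidden-layer size, and pilot length are illustrative assumptions; the dissertation's reservoir-computing and StructNet receivers are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(5)

# BPSK over a short ISI channel; the receiver learns detection from a pilot block only
h = np.array([0.8, 0.5, 0.3])            # channel taps (unknown to the receiver)
noise_std = 0.05
win, n_hidden = 5, 64                     # feature window and hidden-layer size (assumed)

def transmit(bits):
    s = 2.0 * bits - 1.0
    y = np.convolve(s, h)[:len(s)] + noise_std * rng.standard_normal(len(s))
    return s, y

def make_features(y):
    ypad = np.concatenate([y, np.zeros(win - 1)])
    return np.array([ypad[i:i + win] for i in range(len(y))])   # y[i..i+win-1] per symbol

# Extreme learning machine: fixed random hidden layer, least-squares output weights
W = rng.standard_normal((win, n_hidden))
b = rng.standard_normal(n_hidden)
hidden = lambda X: np.tanh(X @ W + b)

pilot_bits = rng.integers(0, 2, 200)                  # short training (pilot) block
s_p, y_p = transmit(pilot_bits)
beta, *_ = np.linalg.lstsq(hidden(make_features(y_p)), s_p, rcond=None)

data_bits = rng.integers(0, 2, 5000)
s_d, y_d = transmit(data_bits)
s_hat = hidden(make_features(y_d)) @ beta             # soft estimates of the data symbols
ber = np.mean((s_hat > 0).astype(int) != data_bits)
print(f"ELM-style equalizer BER over the ISI channel: {ber:.4f}")
```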
114

Pilot-Based Channel Estimation in OFDM System

Wang, Fei 24 May 2011 (has links)
No description available.
115

Frequency-domain equalization of single carrier transmissions over doubly selective channels

Liu, Hong 14 September 2007 (has links)
No description available.
116

Extension of an Existing Simulator for Cellular Communication with Support for 5G NR: Porting of MIMO Channel Estimation Methods from a Prototype to an Existing Link-Level Simulator

Haj Hussein, Majed, Alnahawi, Abdulsalam January 2022 (has links)
Multiple Input Multiple Output (MIMO) and Orthogonal Frequency Division Multiplexing (OFDM) are two efficient technologies used in 5G New Radio (NR) to achieve higher data rates, low latency, and robustness against fading. At the receiver end, the data arrives distorted due to disturbances during transfer over the wireless channel. Channel estimation is the technique applied at the receiver to overcome this problem and mitigate the effect of the disturbance introduced by the wireless channel. The main objective of this thesis is to port an existing channel estimator from a prototype simulator for 5G to a complete link-level simulator that currently supports 4G traffic. Two channel estimation algorithms have been investigated and implemented in the link-level simulator based on a MIMO-OFDM system: the Least Squares (LS) and the Linear Minimum Mean Square Error (LMMSE) estimators. The performance of the channel estimators is evaluated in terms of Bit Error Rate versus Signal-to-Noise Ratio. The effectiveness of the implemented algorithms is evaluated through simulation; the results show that each channel estimation algorithm is suitable for a specific use case depending on channel properties and scenarios, but, time complexity aside, the LMMSE performs better than the LS.
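The two estimators compared in this work are standard; as a simplified illustration (single-antenna OFDM rather than the full MIMO-OFDM setup, with an assumed uniform power delay profile and known noise variance), the sketch below contrasts the per-subcarrier least-squares estimate with LMMSE smoothing across subcarriers.

```python
import numpy as np

rng = np.random.default_rng(1)

n_sc, n_taps = 64, 4
snr_db = 10.0
noise_var = 10 ** (-snr_db / 10)          # unit-power QPSK pilots assumed

# DFT sub-matrix mapping time-domain taps to the frequency-domain channel
F = np.fft.fft(np.eye(n_sc))[:, :n_taps]
R_hh = F @ F.conj().T / n_taps            # channel correlation for a uniform power delay profile

# LMMSE smoothing filter (known correlation and noise variance assumed)
W = R_hh @ np.linalg.inv(R_hh + noise_var * np.eye(n_sc))

mse_ls, mse_lmmse, trials = 0.0, 0.0, 500
for _ in range(trials):
    h = (rng.standard_normal(n_taps) + 1j * rng.standard_normal(n_taps)) / np.sqrt(2 * n_taps)
    H = F @ h
    x = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], n_sc) / np.sqrt(2)   # QPSK pilots
    n = np.sqrt(noise_var / 2) * (rng.standard_normal(n_sc) + 1j * rng.standard_normal(n_sc))
    y = H * x + n
    H_ls = y / x                          # least-squares estimate per subcarrier
    H_lmmse = W @ H_ls                    # LMMSE smoothing across subcarriers
    mse_ls += np.mean(np.abs(H_ls - H) ** 2) / trials
    mse_lmmse += np.mean(np.abs(H_lmmse - H) ** 2) / trials

print(f"LS MSE: {mse_ls:.4f}  LMMSE MSE: {mse_lmmse:.4f}")
```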
117

A study of the system impact from different approaches to link adaptation in WLAN

Perez Moreno, Kevin January 2015 (has links)
The IEEE 802.11 standards define several transmission rates that can be used at the physical layer to adapt the transmission rate to channel conditions. This dynamic adaptation attempts to improve performance in Wireless LAN (WLAN) and hence can have an impact on the Quality of Service (QoS) perceived by the users. In this work we present the design and implementation of several new link adaptation (LA) algorithms. The performance of the developed algorithms is tested and compared against some existing algorithms, such as Minstrel, as well as an ideal LA. The evaluation is carried out in a network system simulator that models all the procedures needed for the exchange of data frames according to the 802.11 standards. Different scenarios are used to simulate various realistic conditions. In particular, the Clear Channel Assessment Threshold (CCAT) is modified in the scenarios and the impact of its modification is also assessed. The algorithms are tested under identical environments to ensure that the experiments are controllable and repeatable. For each algorithm the mean and 5th-percentile throughput are measured under different traffic loads to evaluate and compare the performance of the different algorithms. The tradeoff between signaling overhead and performance is also evaluated. It was found that the proposed link adaptation schemes achieved higher mean throughput than the Minstrel algorithm. We also found that the performance of some of the proposed schemes is close to that of the ideal LA.
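Link adaptation of this kind ultimately maps a channel-quality measurement to one of the standard's transmission rates. The sketch below shows the simplest possible form of such a mapping, SNR-threshold-based rate selection; the thresholds and rate set are hypothetical and do not correspond to Minstrel, the thesis's algorithms, or exact 802.11 requirements.

```python
# A minimal sketch of threshold-based rate selection.
# Thresholds and rates are illustrative, not taken from the thesis or the 802.11 standard.
RATE_TABLE = [      # (minimum SNR in dB, PHY rate in Mb/s) -- hypothetical values
    (25.0, 54.0),
    (18.0, 36.0),
    (12.0, 18.0),
    (6.0, 9.0),
    (0.0, 6.0),
]

def select_rate(snr_db: float) -> float:
    """Pick the highest rate whose SNR threshold the current estimate satisfies."""
    for threshold, rate in RATE_TABLE:
        if snr_db >= threshold:
            return rate
    return RATE_TABLE[-1][1]   # fall back to the most robust rate

for snr in (3.0, 14.0, 27.0):
    print(f"SNR {snr:4.1f} dB -> {select_rate(snr):4.1f} Mb/s")
```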
118

Impact of Channel Estimation Errors on Space Time Trellis Codes

Menon, Rekha 22 January 2004 (has links)
Space Time Trellis Coding (STTC) is a unique technique that combines the use of multiple transmit antennas with channel coding. This scheme provides capacity benefits in fading channels and helps in improving the data rate and reliability of wireless communication. STTC schemes have been primarily designed assuming perfect channel estimates to be available at the receiver. However, in practical wireless systems, this is never the case. The noisy wireless channel precludes an exact characterization of channel coefficients. Even near-perfect channel estimates can necessitate huge overhead in terms of processing or spectral efficiency. This practical concern motivates the study of the impact of channel estimation errors on the design and performance of STTC. The design criteria for STTC are validated in the absence of perfect channel estimates at the receiver. Analytical results are presented that model the performance of STTC systems in the presence of channel estimation errors. Training-based channel estimation schemes are the most popular choice for STTC systems. The amount of training, however, increases with the number of transmit antennas used, the number of multi-path components in the channel, and a decrease in the channel coherence time. This dependence is shown to decrease the performance gain obtained when increasing the number of transmit antennas in STTC systems, especially in channels with a large Doppler spread (low channel coherence time). In frequency-selective channels, the training overhead associated with increasing the number of antennas can be so large that no benefit is shown to be obtained by using STTC. The amount of performance degradation due to channel estimation errors is shown to be influenced by system parameters such as the specific STTC code employed and the number of transmit and receive antennas in the system, in addition to the magnitude of the estimation error. Hence, an inappropriate choice of system parameters is shown to significantly alter the performance pattern of STTC. The viability of STTC in practical wireless systems is thus addressed, and it is shown that channel estimation could offset benefits derived from this scheme. / Master of Science
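The basic mechanism, detection performed against an imperfect channel estimate, can be illustrated with a much simpler system than STTC. The Monte Carlo sketch below uses coherent BPSK detection over a flat Rayleigh channel, modeling the estimate as the true channel plus Gaussian error with an assumed variance; it is a stand-in for the effect studied here, not the thesis's STTC analysis.

```python
import numpy as np

rng = np.random.default_rng(2)

n_bits = 200_000
snr_db = 10.0
noise_var = 10 ** (-snr_db / 10)
est_err_var = 0.05                     # variance of the channel estimation error (assumed)

bits = rng.integers(0, 2, n_bits)
s = 2.0 * bits - 1.0                   # BPSK symbols

# Flat Rayleigh fading channel and additive noise
h = (rng.standard_normal(n_bits) + 1j * rng.standard_normal(n_bits)) / np.sqrt(2)
n = np.sqrt(noise_var / 2) * (rng.standard_normal(n_bits) + 1j * rng.standard_normal(n_bits))
y = h * s + n

# Imperfect channel estimate: true channel plus Gaussian estimation error
e = np.sqrt(est_err_var / 2) * (rng.standard_normal(n_bits) + 1j * rng.standard_normal(n_bits))
h_hat = h + e

for label, href in (("perfect CSI", h), ("imperfect CSI", h_hat)):
    decisions = (np.real(np.conj(href) * y) > 0).astype(int)   # coherent BPSK detection
    ber = np.mean(decisions != bits)
    print(f"{label}: BER = {ber:.4f}")
```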
119

Array Processing for Mobile Wireless Communication in the 60 GHz Band

Jakubisin, Daniel J. 19 February 2013 (has links)
In 2001, the Federal Communications Commission made available a large block of spectrum known as the 60 GHz band. The 60 GHz band is attractive because it provides the opportunity of multi-Gbps data rates with unlicensed commercial use. One of the main challenges facing the use of this band is poor propagation characteristics including high path loss and strong attenuation due to oxygen absorption. Antenna arrays have been proposed as a means of combating these effects. This thesis provides an analysis of array processing for communication systems operating in the 60 GHz band. Based on measurement campaigns at 60 GHz, deterministic modeling of the channel through ray tracing is proposed. We conduct a site-specific study using ray tracing to model an outdoor and an indoor environment on the Virginia Tech campus. Because arrays are required for antenna gain and adaptability, we explore the use of arrays as a form of equalization in the presence of channel-induced intersymbol interference. The first contribution of this thesis is to establish the expected performance achieved by arrays in the outdoor environment. The second contribution is to analyze the performance of adaptive algorithms applied to array processing in mobile indoor and outdoor environments. / Master of Science
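As a small illustration of the array processing that motivates this work, the sketch below evaluates the normalized beam pattern of a half-wavelength-spaced uniform linear array at 60 GHz with conjugate-match (delay-and-sum) weights steered toward an assumed user direction; the array size and steering angle are illustrative assumptions, not parameters from the thesis.

```python
import numpy as np

c = 3e8
fc = 60e9                      # carrier frequency (60 GHz band)
lam = c / fc
n_elem = 16                    # uniform linear array size (assumed)
d = lam / 2                    # half-wavelength element spacing

def steering_vector(theta_rad: float) -> np.ndarray:
    """Array response of a half-wavelength-spaced ULA toward angle theta (broadside = 0)."""
    k = 2 * np.pi / lam
    return np.exp(1j * k * d * np.arange(n_elem) * np.sin(theta_rad))

# Steer toward a desired user at 20 degrees and evaluate the normalized beam pattern
w = steering_vector(np.deg2rad(20.0)) / n_elem          # conjugate-match (delay-and-sum) weights
angles = np.deg2rad(np.linspace(-90, 90, 361))
pattern_db = 20 * np.log10(
    np.abs(np.array([w.conj() @ steering_vector(a) for a in angles])) + 1e-12
)   # normalized so the mainlobe peak is 0 dB

peak = np.rad2deg(angles[np.argmax(pattern_db)])
print(f"Beam pattern peak at {peak:.1f} degrees, normalized gain {pattern_db.max():.1f} dB")
```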
120

Parametric Estimation of Stochastic Fading Channels and Their Role in Adaptive Radios

Gaeddert, Joseph D. 24 February 2005 (has links)
The detrimental effects that rapid power fluctuation has on narrowband wireless communication channels have long been a concern of the mobile radio community, as appropriate channel models seek to gauge link quality. Furthermore, advances in signal processing capabilities and the desire for spectrally efficient, low-power radio systems have rekindled interest in adaptive transmission schemes; hence some method of quickly probing the link quality and/or predicting channel conditions is required. Fitting mathematical distributions that model the channel profile requires estimating fading parameters from a finite number of discrete-time samples of signal amplitude. While the statistical inference of such estimators has proven to be robust to rapidly shifting channel conditions, the benefits are quickly realized at the expense of processing complexity. Furthermore, computations of the best-known estimation techniques are often iterative, tedious, and complex. This thesis takes a renewed look at estimating fading parameters for the Nakagami-m, Rice-K, and Weibull distributions, specifically by showing that the need to solve transcendental equations in the estimators can be circumvented through polynomial approximation in the least-squares sense or via asymptotic series expansion, which often leads to closed-form, simplified expressions. These new estimators are compared to existing ones; their performance is comparable while preserving a lower computational complexity. In addition, the thesis investigates the impact that knowledge of the fading profile has on systems employing adaptive switching modulation schemes by characterizing performance in terms of average bit error rate (BER) and spectral efficiency. A channel undergoing Rice-K fading on top of log-normal shadowing is simulated by correlating samples of received signal amplitude according to the user's Doppler speed, carrier frequency, etc. The channel's throughput and BER performance are analyzed using the above estimation techniques and compared to non-estimation assumptions. Further discussion on narrowband fading parameter estimation and its applicability to wireless communication channels is provided. / Master of Science
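Of the distributions treated here, the Nakagami-m case has a particularly compact moment-based estimator (the inverse normalized variance of the signal power), which the sketch below demonstrates; this is a standard textbook estimator used for illustration, not the polynomial-approximation or series-expansion estimators developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(3)

def nakagami_m_moment_estimate(r: np.ndarray) -> float:
    """Inverse-normalized-variance (moment) estimate of the Nakagami-m parameter
    from amplitude samples r."""
    power = r ** 2
    mu = np.mean(power)
    return mu ** 2 / np.mean((power - mu) ** 2)

# Generate Nakagami-m amplitudes via the Gamma distribution of the power: P ~ Gamma(m, Omega/m)
m_true, omega, n = 2.5, 1.0, 50_000
power = rng.gamma(shape=m_true, scale=omega / m_true, size=n)
amplitude = np.sqrt(power)

print(f"true m = {m_true}, estimated m = {nakagami_m_moment_estimate(amplitude):.3f}")
```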
