291 |
Orthogonal frequency division multiplexing (OFDM) implementation as part of a software defined radio (SDR) environment. Sonntag, Christoph 12 1900 (has links)
Thesis (MScEng (Electrical and Electronic Engineering))--University of Stellenbosch, 2005. / Orthogonal Frequency Division Multiplexing (OFDM) has gained considerable attention over the past few years. In our modern world, the need for faster data transmission is never-ending. OFDM modulation packs modulated carriers more densely in the frequency domain than other existing frequency multiplexing schemes, thus achieving higher data rates over communications channels.
Software Defined Radio (SDR) creates a very good entry point for designing any communications system. SDR is an architecture that aims to minimise hardware components in electronic communications circuits by doing all possible processing in the software domain. Such systems have many advantages over existing hardware implementations and can be executed on various platforms and embedded systems, given that the appropriate analogue front ends are attached to the system.
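The core OFDM operation the abstract describes, mapping subcarrier symbols to a time-domain waveform with an IFFT and guarding it with a cyclic prefix, can be sketched in a few lines. This is a minimal illustrative sketch; the FFT size, prefix length and QPSK mapping are assumptions, not parameters from the thesis:

```python
import numpy as np

def ofdm_modulate(symbols, n_fft=64, cp_len=16):
    # IFFT to the time domain, then prepend the last cp_len samples
    # as a cyclic prefix to guard against multipath delay spread
    time_domain = np.fft.ifft(symbols, n_fft)
    return np.concatenate([time_domain[-cp_len:], time_domain])

def ofdm_demodulate(rx, n_fft=64, cp_len=16):
    # Discard the cyclic prefix and return to the frequency domain
    return np.fft.fft(rx[cp_len:cp_len + n_fft], n_fft)

# QPSK symbols on 64 subcarriers, recovered exactly over an ideal channel
rng = np.random.default_rng(0)
tx_syms = (2 * rng.integers(0, 2, 64) - 1) + 1j * (2 * rng.integers(0, 2, 64) - 1)
rx_syms = ofdm_demodulate(ofdm_modulate(tx_syms))
```

In an SDR setting, exactly this kind of baseband processing moves into software, with only the analogue front end left in hardware.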
|
292 |
Frequency synchronization methods for digital broadband receivers. 雷靜, Lei, Jing. January 2002 (has links)
published_or_final_version / Electrical and Electronic Engineering / Doctoral / Doctor of Philosophy
|
293 |
Fiber-based nonlinear photonic processor: a versatile platform for optical communication signal processing. Kuo, Ping-piu, 郭炳彪. January 2008 (has links)
published_or_final_version / Electrical and Electronic Engineering / Master / Master of Philosophy
|
294 |
Design and analysis of cooperative and non-cooperative resource management algorithms in high performance wireless systems. Kong, Zhen, 孔振. January 2008 (has links)
published_or_final_version / Electrical and Electronic Engineering / Doctoral / Doctor of Philosophy
|
295 |
Mobile Satellite Broadcast and Multichannel Communications: analysis and design. Martin, Cristoff January 2005 (has links)
In this thesis, analytical design and performance analysis techniques for wireless communications with diversity are studied. The impact of impairments such as correlated fading is analyzed using statistical models. Countermeasures designed to overcome, or even exploit, such effects are proposed and examined. In particular, two applications are considered: satellite broadcast to vehicular terminals and communication using transmitters and receivers equipped with multiple antennas.

Mobile satellite broadcast systems offer the possibility of high data rate services with reliability and ubiquitous coverage. The design of system architectures providing such services requires complex trade-offs involving technical, economical, and regulatory aspects. A satisfactory availability can be ensured using space, terrestrial, and time diversity techniques. The amount of applied diversity affects the spectral efficiency and system performance. Also, dedicated satellite and terrestrial networks represent significant investments, and regulatory limitations may further complicate system design.

The work presented in this thesis provides insight into the technical aspects of the trade-offs above. This is done by deriving an efficient method for estimating what resources, in terms of spectrum and delay, are required for a broadcast service to reach a satisfactory number of end users using a well designed system. The results are based on statistical models of the mobile satellite channel, for which efficient analytical design and error rate estimation methods are derived. We also provide insight into the achievable spectral efficiency using different transmitter and receiver configurations.

Multiple-element antenna communication is a promising technology for future high speed wireless infrastructures. By adding a spatial dimension, radio resources in terms of transmission power and spectrum can be used more efficiently. Much of the design and analysis work has focused on cases where the transmitter either has access to perfect channel state information or is blind and the spatial channels are uncorrelated.

Herein, systems where the fading of the spatial channels is correlated and/or the transmitter has access to partial channel state information are considered. While maintaining perfect channel knowledge at the transmitter may prove difficult, updating parameters that change on a slower time scale could be realistic. Here we formulate analysis and design techniques based on statistical models of the multichannel propagation. Fundamental properties of the multi-element antenna channel and limitations given by information theory are investigated under an asymptotic assumption on the number of antennas on either side of the system. For example, limiting normal distributions are derived for the squared singular values of the channel matrix and the mutual information. We also propose and examine a practical scheme capable of exploiting partial channel state information.

In both applications outlined above, performance can be improved by using statistical models of the channel characteristics in the system design. The main contribution of this thesis is the development of efficient techniques for estimating system performance in different scenarios. Such techniques are vital to obtain insight into the impact of different impairments and how countermeasures against them should be designed.
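The squared singular values of the channel matrix and the mutual information studied in the thesis are straightforward to compute for a sampled channel. A small sketch follows; an i.i.d. Rayleigh channel with equal power allocation is assumed here, not the correlated models the thesis analyzes:

```python
import numpy as np

rng = np.random.default_rng(4)
n_t = n_r = 8
snr = 10.0
# One draw of an i.i.d. Rayleigh-fading MIMO channel, unit average gain per entry
H = (rng.standard_normal((n_r, n_t)) + 1j * rng.standard_normal((n_r, n_t))) / np.sqrt(2)

# Mutual information with equal power allocation (no channel knowledge at
# the transmitter), in bits/s/Hz: log2 det(I + (SNR/n_t) H H^H)
gram = H @ H.conj().T
sign, logdet = np.linalg.slogdet(np.eye(n_r) + (snr / n_t) * gram)
mi = logdet / np.log(2)

# Equivalently, a sum over the squared singular values of H
sq_sv = np.linalg.svd(H, compute_uv=False) ** 2
mi_sv = np.sum(np.log2(1 + (snr / n_t) * sq_sv))
```

Averaging `mi` over many channel draws is how the limiting normal behaviour of the mutual information, as described in the abstract, would show up empirically.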
|
296 |
Linear transceivers for MIMO relays. Shang, Cheng Yu Andy January 2014 (has links)
Relays can be used in wireless communication systems to provide cell coverage extension, reduce coverage holes and increase throughput. Full duplex (FD) relays, which transmit and receive in the same time slot, can have a higher transmission rate than half duplex (HD) relays. However, FD relays suffer from self interference (SI), caused by the transmitted relay signal being received by the relay receiver, which can degrade their performance. In the literature, the SI channel is commonly nulled and removed as this simplifies the problem considerably. In practice, complete nulling is impossible due to channel estimation errors. Therefore, in this thesis, we consider the leakage of the SI from the FD relay. Our goal is to reduce the SI and increase the signal to noise ratio (SNR) of the relay system. Hence, we propose different precoder and weight vector designs. These designs may increase the end to end (e2e) signal to interference and noise ratio (SINR) at the destination. Here, a precoder is applied to a signal before transmission and a weight vector is applied to the received signal after reception.
Initially, we consider an academic example based on a two-path FD multiple-input multiple-output (MIMO) system. The analysis of the SINR with precoders and weight vectors applied shows that the SI component carries the same underlying signal as the source signal when a relay processing delay is not considered. Hence, to model the SI problem more realistically, we alter our relay design and focus on a one-path FD MIMO relay system with a relay processing delay. For the precoders and weight vectors, choosing the optimal scheme is numerically challenging. Thus, we design the precoders and weight vectors using ad-hoc and near-optimal schemes. The ad-hoc schemes for the precoders are singular value decomposition (SVD), maximising the signal to leakage plus noise ratio (SLNR) using the Rayleigh-Ritz (RR) method, and zero forcing (ZF). The ad-hoc schemes for the weight vectors are SVD, minimum mean squared error (MMSE) and ZF. The near-optimal scheme uses an iterative RR method to compute the source precoder and destination weight vector, while the relay precoder and weight vector are computed using the ad-hoc methods that provide the best performance.
The average power and the instantaneous power normalisations are the two methods used to constrain the relay precoder power. The average power normalisation method uses a novel closed form covariance matrix with an optimisation approach to constrain the relay precoder. This closed form covariance matrix is mathematically derived using matrix vectorization techniques. For the instantaneous power normalisation method, the constraint process does not require an optimisation approach. However, with this method the e2e SINR is difficult to calculate, so we use the symbol error rate (SER) as the measure of performance.
The results from the different precoder and weight vector designs suggest that reducing the SI using the relay weight vector instead of the relay precoder results in a higher e2e SINR. Consequently, to increase the e2e SINR, performing complicated processing at the relay receiver is more effective than at the relay transmitter.
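As an illustration of the SVD-based precoder and weight vector designs mentioned above, the following sketch beamforms over a single random MIMO channel. It shows only the basic mechanism; the relay chain, processing delay and SI terms of the thesis are omitted:

```python
import numpy as np

rng = np.random.default_rng(1)
n_t, n_r = 4, 4
# One draw of a Rayleigh-fading MIMO channel
H = (rng.standard_normal((n_r, n_t)) + 1j * rng.standard_normal((n_r, n_t))) / np.sqrt(2)

# SVD-based design: transmit along the strongest right singular vector
# (precoder) and combine with the matching left singular vector (weight)
U, s, Vh = np.linalg.svd(H)
precoder = Vh.conj().T[:, 0]    # unit-norm transmit direction
weight = U[:, 0]                # matched receive combiner

# The effective channel gain equals the largest singular value of H
effective_gain = np.abs(weight.conj() @ H @ precoder)
```

This pairing maximises the received SNR for a single stream over this hop, which is why SVD appears among the ad-hoc schemes for both precoders and weight vectors.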
|
297 |
The Architecture and Design of Parallel Processing for Real-Time Multiplexing Telemetry Data. Jun, Zhang, Qishan, Zhang 10 1900 (has links)
International Telemetering Conference Proceedings / October 26-29, 1992 / Town and Country Hotel and Convention Center, San Diego, California / Parallel processing technology has been widely applied in many scientific and engineering fields, including telemetry. In particular, telemetry is developing towards large capacity, high rates, multiple data streams and programmable formats, which places still higher demands on the processing of real-time multiplexing telemetry data. On the basis of an analysis of the characteristics of telemetry data processing (TDP), parallel processing concepts and methods are adopted and, addressing multiple-channel data streams from different objects, several architectures of parallel processing for real-time multiplexing telemetry data are presented. This makes better use of the concurrency inherent in TDP and handles the telemetry information effectively at every processing level of the whole telemetering information processing system. The paper also compares the properties and main features of these parallel processing architectures. Experiments have indicated that using a parallel processing architecture based on the concurrency of telemetry data processing is an economical and effective way to improve the performance of a telemetry information processing system.
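The idea of exploiting concurrency across multiplexed streams can be sketched as follows. The frame format and the per-stream processing step are hypothetical placeholders, not the paper's architecture:

```python
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

# Hypothetical multiplexed input: frames tagged with a stream identifier
frames = [(i % 3, f"sample-{i}") for i in range(9)]

# Demultiplex the combined telemetry stream into per-object channels
streams = defaultdict(list)
for stream_id, payload in frames:
    streams[stream_id].append(payload)

def process(stream):
    # Stand-in for real per-stream processing (decommutation, scaling, ...)
    sid, payloads = stream
    return sid, len(payloads)

# Process each demultiplexed stream concurrently, one worker per stream
with ThreadPoolExecutor(max_workers=len(streams)) as pool:
    results = dict(pool.map(process, streams.items()))
```

Each stream is independent after demultiplexing, which is the concurrency the paper's architectures exploit at the level of dedicated processors rather than threads.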
|
298 |
Combating client fingerprinting through the real-time detection and analysis of tailored web content. Born, Kenton P. January 1900 (has links)
Doctor of Philosophy / Department of Computing Science / David Gustafson / The web is no longer composed of static resources. Technology and demand have driven the web towards a complex, dynamic model that tailors content toward specific client fingerprints. Servers now commonly modify responses based on the browser, operating system, or location of the connecting client. While this information may be used for legitimate purposes, malicious adversaries can also use this information to deliver misinformation or tailored exploits. Currently, there are no tools that allow a user to detect when a response contains tailored content.
Developing an easily configurable multiplexing system solved the problem of detecting tailored web content. In this solution, a custom proxy receives the initial request from a client, duplicating and modifying it in many ways to change the browser, operating system, and location-based client fingerprint. All of the requests with various client fingerprints are simultaneously sent to the server. As the responses are received back at the proxy, they are aggregated and analyzed against the original response. The results of the analysis are then sent to the user along with the original response. This process allowed the proxy to detect tailored content that was previously undetectable through casual browsing.
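The duplicate-and-compare strategy can be sketched as follows. The fingerprint set and the toy `fetch` stand-in are illustrative assumptions; the dissertation's proxy issues real HTTP requests and performs a much richer analysis:

```python
# Hypothetical client fingerprints for the proxy to impersonate
# (header values are illustrative)
FINGERPRINTS = {
    "chrome_windows": {"User-Agent": "Mozilla/5.0 (Windows NT 10.0) Chrome/120.0"},
    "safari_mac": {"User-Agent": "Mozilla/5.0 (Macintosh) Safari/605.1"},
    "curl": {"User-Agent": "curl/8.4.0"},
}

def fetch(url, headers):
    # Stand-in for an HTTP request: a toy server that tailors its
    # response to the connecting client's User-Agent
    if "curl" in headers["User-Agent"]:
        return "<html>plain page for scripts</html>"
    return "<html>full page for browsers</html>"

def detect_tailoring(url):
    # Fetch the same URL under every fingerprint and flag responses
    # that differ from the first (baseline) response
    responses = {name: fetch(url, hdrs) for name, hdrs in FINGERPRINTS.items()}
    baseline = next(iter(responses.values()))
    return {name: body != baseline for name, body in responses.items()}

flags = detect_tailoring("http://example.test/")
```

A flagged fingerprint tells the user that the server served them different content than it served other client types, which is exactly the signal casual browsing cannot reveal.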
Theoretical and empirical analysis was performed to ensure the multiplexing proxy detected tailored content at an acceptable false alarm rate. Additionally, the tool was analyzed for its ability to provide utility to open source analysts, cyber analysts, and reverse engineers. The results showed that the proxy is an essential, scalable tool that provides capabilities that were not previously available.
|
299 |
Channel estimation techniques for filter bank multicarrier based transceivers for next generation of wireless networks. Ijiga, Owoicho Emmanuel January 2017 (has links)
A dissertation submitted to the Faculty of Engineering and the Built Environment, University of the Witwatersrand, Johannesburg, in fulfillment of the requirements for the degree of Master of Science in Engineering (Electrical and Information Engineering), August 2017 / The fourth generation (4G) of wireless communication systems is designed on the principles of cyclic prefix orthogonal frequency division multiplexing (CP-OFDM), where the cyclic prefix (CP) is used to combat inter-symbol interference (ISI) and inter-carrier interference (ICI) in order to achieve higher data rates than the previous generations of wireless networks. Various filter bank multicarrier (FBMC) systems have been considered as potential waveforms for the fast emerging next generation (xG) of wireless networks, especially the fifth generation (5G). Examples of the considered waveforms are orthogonal frequency division multiplexing with offset quadrature amplitude modulation based filter banks, universal filtered multicarrier (UFMC), bi-orthogonal frequency division multiplexing (BFDM) and generalized frequency division multiplexing (GFDM). In perfect reconstruction (PR) or near perfect reconstruction (NPR) filter bank designs, these FBMC waveforms adopt well-designed prototype filters (used to design the synthesis and analysis filter banks) so as to either replace or minimize the CP usage of 4G networks, providing higher spectral efficiency and thus an overall increase in data rates. Accurate design of the FIR low-pass prototype filter in NPR filter banks results in minimal signal distortion, making the analysis filter bank a time-reversed version of the corresponding synthesis filter bank.
However, in non-perfect reconstruction (Non-PR) filter banks, the analysis filter bank is not directly a time-reversed version of the corresponding synthesis filter bank, as the prototype filter impulse response for this system is formulated (in this dissertation) by introducing randomly generated errors. Hence, aliasing and amplitude distortions are more prominent for Non-PR.
Channel estimation (CE) is used to predict the behaviour of the frequency selective channel and is usually adopted to ensure excellent reconstruction of the transmitted symbols. These techniques can be broadly classified as pilot-based, semi-blind and blind channel estimation schemes. In this dissertation, two linear pilot-based CE techniques, namely least squares (LS) and linear minimum mean square error (LMMSE), and three adaptive channel estimation schemes, namely least mean squares (LMS), normalized least mean squares (NLMS) and recursive least squares (RLS), are presented, analyzed and documented. These are implemented while exploiting the near-orthogonality properties of offset quadrature amplitude modulation (OQAM) to mitigate the effects of interference for two filter bank waveforms (i.e. OFDM/OQAM and GFDM/OQAM) for the next generation of wireless networks, assuming conditions of both NPR and Non-PR, in slow and fast frequency selective Rayleigh fading channels. Results obtained from computer simulations showed that the channel estimation schemes performed better in an NPR filter bank system than in Non-PR filter banks. The lower performance of the Non-PR system is due to the amplitude distortion and aliasing introduced by the randomly generated errors used in designing its prototype filters. It can be concluded that the RLS, NLMS, LMS, LMMSE and LS channel estimation schemes offered the best normalized mean square error (NMSE) and bit error rate (BER) performances (in decreasing order) for both waveforms, assuming both NPR and Non-PR filter banks.
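The pilot-based LS and LMMSE estimators can be sketched on a toy per-subcarrier channel. The pilot pattern, noise level and identity channel covariance are assumptions for illustration; the OQAM interference handling and the adaptive (LMS/NLMS/RLS) schemes of the dissertation are omitted:

```python
import numpy as np

rng = np.random.default_rng(2)
n_pilots = 32
pilots = 2 * rng.integers(0, 2, n_pilots) - 1.0        # known BPSK pilot symbols
h_true = (rng.standard_normal(n_pilots)
          + 1j * rng.standard_normal(n_pilots)) / np.sqrt(2)
noise = 0.05 * (rng.standard_normal(n_pilots)
                + 1j * rng.standard_normal(n_pilots))
y = h_true * pilots + noise                            # received pilot subcarriers

# Least-squares estimate: divide out the known pilot on each subcarrier
h_ls = y / pilots

# Per-tap LMMSE refinement under an assumed identity channel covariance
# (a Wiener-style shrinkage of the LS estimate toward zero)
snr = np.mean(np.abs(h_true) ** 2) / np.mean(np.abs(noise) ** 2)
h_lmmse = h_ls * (snr / (snr + 1.0))

mse_ls = np.mean(np.abs(h_ls - h_true) ** 2)
```

The LS estimate needs no channel statistics but passes noise straight through; LMMSE trades that simplicity for lower error when the channel covariance and SNR are known, matching the relative performance ordering the abstract reports.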
Keywords: Channel estimation, Filter bank, OFDM/OQAM, GFDM/OQAM, NPR, Non-PR, 5G, Frequency selective channel. / CK2018
|
300 |
Optimal chunk-based resource allocation for OFDMA systems with multiple BER requirements. Unknown Date (has links)
In wireless orthogonal frequency division multiple-access (OFDMA) standards, subcarriers are grouped into chunks, and a chunk of subcarriers is the minimum allocation unit for subcarrier allocation. We investigate chunk-based resource allocation for the OFDMA downlink, where data streams contain packets with diverse bit-error-rate (BER) requirements. Supposing that adaptive transmissions are based on a number of discrete modulation and coding modes, we derive the optimal resource allocation scheme that maximizes the weighted sum of average user rates under the multiple-BER and total power constraints. With proper formulation, the relevant optimization problem is cast as an integer linear program (ILP). We rigorously prove that zero duality gap holds between the formulated ILP and its dual problem. Furthermore, it is shown that the optimal strategy for this problem can be obtained through Lagrange dual-based gradient iterations with fast convergence and low computational complexity per iteration. Relying on stochastic optimization tools, we further develop a novel on-line algorithm capable of dynamically learning the underlying channel distribution and asymptotically approaching the optimal strategy without a priori knowledge of the wireless channels. In addition, we extend the proposed approach to maximizing α-fair utility functions of average user rates, and show that such a utility maximization nicely balances the trade-off between total throughput and fairness among users. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2014. / FAU Electronic Theses and Dissertations Collection
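A stripped-down version of chunk-based weighted sum-rate allocation can be sketched as a toy example. The weights, the discrete rate table, and the decoupling across chunks are illustrative assumptions; the ILP formulation, BER constraints and Lagrange dual iterations of the dissertation are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(3)
n_chunks, n_users = 8, 3
weights = np.array([1.0, 1.5, 0.5])                # per-user priority weights
# Discrete rates (bits/symbol) each user could sustain on each chunk,
# e.g. after mapping channel quality to a modulation/coding mode
rates = rng.integers(1, 7, size=(n_users, n_chunks)).astype(float)

# With one transmission mode fixed per chunk, maximising the weighted sum
# rate decouples across chunks: give each chunk to its best weighted user
assignment = np.argmax(weights[:, None] * rates, axis=0)
weighted_sum_rate = sum(weights[assignment[c]] * rates[assignment[c], c]
                        for c in range(n_chunks))
```

The full problem is harder because power must be shared across chunks under the total power constraint, which is what makes the dual decomposition and gradient iterations of the dissertation worthwhile.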
|