71 |
Space Time Processing for Third Generation CDMA Systems. Alam, Fakhrul. 25 November 2002 (has links)
The capacity of a cellular system is limited by two different phenomena, namely multipath fading and multiple access interference (MAI). A Two-Dimensional (2-D) receiver combats both by processing the signal in both the spatial and temporal domains. An ideal 2-D receiver would perform joint space-time processing, but at the price of high computational complexity. In this dissertation we investigate a computationally simpler technique termed the Beamformer-Rake. In a Beamformer-Rake, the output of a beamformer is fed into a succeeding temporal processor, combining the advantages of the beamformer and the Rake receiver. Wireless service providers throughout the world are working to introduce third generation (3G) cellular service that will provide higher data rates and better spectral efficiency. Wideband CDMA (WCDMA) has been widely accepted as one of the air interfaces for 3G. A Beamformer-Rake receiver can be an effective solution to provide the enhanced receiver capabilities needed to achieve the required performance of a WCDMA system. This dissertation investigates different Beamformer-Rake receiver structures suitable for the WCDMA system and compares their performance under different operating conditions. This work develops Beamformer-Rake receivers for the WCDMA uplink that employ Eigen-Beamforming techniques based on the Maximum Signal to Noise Ratio (MSNR) and Maximum Signal to Interference and Noise Ratio (MSINR) criteria. Both structures employ Maximal Ratio Combining (MRC) to exploit temporal diversity.
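As a sketch of the temporal stage, MRC weights each Rake finger output by the conjugate of its channel estimate so that the multipath contributions add coherently; the channel gains and noise values below are illustrative, not results from the dissertation:

```python
import numpy as np

# Illustrative complex channel gains for three resolvable multipath components
h = np.array([0.9 + 0.2j, 0.4 - 0.3j, 0.2 + 0.1j])

s = 1 + 0j                                  # transmitted BPSK symbol "+1"
noise = np.array([0.05j, -0.02 + 0j, 0.01 + 0.03j])
y = h * s + noise                           # per-finger despread outputs

# MRC: weight each finger by its conjugate channel estimate and sum
z = np.sum(np.conj(h) * y)

# The decision statistic is dominated by sum(|h|^2) * s
print(z.real > 0)
```

The combined statistic scales with the total multipath energy, which is why MRC extracts temporal diversity from the channel.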
MSNR based Eigen-Beamforming leads to a Simple Eigenvalue problem (SE). This work investigates several algorithms that can be employed to solve the SE and compares them in terms of computational complexity and performance. MSINR based Eigen-Beamforming results in a Generalized Eigenvalue problem (GE). The dissertation describes several techniques to form the GE and algorithms to solve it. We propose a new low-complexity algorithm, termed the Adaptive Matrix Inversion (AMI), to solve the GE, and compare its performance to that of existing algorithms. Different techniques to form the GE are also compared. MSINR based beamforming is demonstrated to be superior to MSNR based beamforming in the presence of strong interference.
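The MSINR criterion behind the GE can be illustrated numerically: the optimum weight vector is the principal generalized eigenvector of the signal and interference-plus-noise covariance pair. The sketch below uses synthetic covariance matrices and a direct (non-adaptive) solution, not the AMI algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 4  # antenna elements

# Synthetic spatial covariance matrices (illustrative, not measured data)
a = np.ones(M, dtype=complex)               # steering vector, broadside source
R_s = np.outer(a, a.conj())                 # rank-1 desired-signal covariance
B = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
R_in = B @ B.conj().T + np.eye(M)           # interference-plus-noise, pos. def.

# MSINR beamforming: principal generalized eigenvector of (R_s, R_in),
# obtained here by reducing R_s w = lambda R_in w to an ordinary problem.
vals, vecs = np.linalg.eig(np.linalg.inv(R_in) @ R_s)
k = int(np.argmax(vals.real))
w = vecs[:, k]                              # MSINR weight vector

# The achieved SINR equals the largest generalized eigenvalue
sinr = (w.conj() @ R_s @ w) / (w.conj() @ R_in @ w)
print(np.isclose(sinr.real, vals[k].real))
```

The MSNR problem is the special case where the denominator matrix is the (white) noise covariance, which collapses the GE to a simple eigenvalue problem.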
There are also Pilot Symbol Assisted (PSA) beamforming techniques that exploit the Minimum Mean Squared Error (MMSE) criterion. We compare the MSINR based Beamformer-Rake, in terms of Bit Error Rate (BER), with one that performs MMSE based beamforming via Direct Matrix Inversion (DMI). In a wireless system where the number of co-channel interferers is larger than the number of elements of a practical antenna array, explicit null-steering cannot be performed. As a result, the advantage of beamforming is partially lost. In this scenario it is better to attain diversity gain at the cost of spatial aliasing. We demonstrate this with the aid of simulation.
Orthogonal Frequency Division Multiplexing (OFDM) is a multi-carrier technique that has recently received considerable attention for high speed wireless communication. OFDM has been accepted as the standard for Digital Audio Broadcast (DAB) and Digital Video Broadcast (DVB) in Europe. It has also been established as one of the modulation formats for the IEEE 802.11a wireless LAN standard. OFDM has emerged as one of the primary candidates for Fourth Generation (4G) wireless communication systems and high speed ad hoc wireless networks. We propose a simple pilot symbol assisted frequency domain beamforming technique for the OFDM receiver and demonstrate the concept of sub-band beamforming. Vector channel models measured with the MPRG Viper test-bed are also employed to investigate the performance of the beamforming scheme. / Ph. D.
|
72 |
Adaptive Asymmetric Slot Allocation for Heterogeneous Traffic in WCDMA/TDD Systems. Park, JinSoo. 29 November 2004 (has links)
Although 3rd and 4th generation wireless systems aim to deliver high-speed multimedia services, full-fledged multimedia services remain difficult to realize because of insufficient system capacity. Many technical challenges must be overcome to realize true multimedia services. One of those challenges is how to allocate resources to traffic efficiently as the wireless systems evolve. The review of the literature shows that strategic manipulation of traffic can lead to an efficient use of resources in both wire-line and wireless networks. This draws our attention to the role of link layer protocols, which is to orchestrate the transmission of packets in an efficient way using the given resources. The Media Access Control (MAC) layer therefore plays a very important role in this context.
In this research, we investigate technical challenges involving resource control and management in the design of MAC protocols based on the characteristics of traffic, and provide strategies to address those challenges. The first and foremost matter in wireless MAC protocol research is the choice of multiple access scheme, each of which has advantages and disadvantages. We choose Wideband Code Division Multiple Access/Time Division Duplexing (WCDMA/TDD) systems since they are known to be efficient for bursty traffic. Most existing MAC protocols developed for WCDMA/TDD systems focus on the performance of a unidirectional link, in particular the uplink, assuming that the number of slots for each link is fixed a priori. This ignores the dynamic aspect of TDD systems. We believe that adaptive dynamic slot allocation can bring further benefits in terms of efficient resource management. Meanwhile, the adaptive slot allocation issue has been dealt with from a completely different angle: related research has focused on adaptive slot allocation to minimize inter-cell interference in multi-cell environments. We believe that these two issues need to be handled together in order to enhance the performance of MAC protocols, and thus embark upon a study of adaptive dynamic slot allocation for the MAC protocol.
This research starts from the examination of key factors that affect the adaptive allocation strategy. Through the review of the literature, we conclude that traffic characterization can be an essential component for this research to achieve efficient resource control and management. So we identify appropriate traffic characteristics and metrics. The volume and burstiness of traffic are chosen as the characteristics for our adaptive dynamic slot allocation.
Based on this examination, we propose four major adaptive dynamic slot allocation strategies: (i) a strategy based on the estimation of burstiness of traffic, (ii) a strategy based on the estimation of volume and burstiness of traffic, (iii) a strategy based on the parameter estimation of a distribution of traffic, and (iv) a strategy based on the exploitation of physical layer information. The first method estimates the burstiness in both links and assigns the number of slots for each link according to a ratio of these two estimates. The second method estimates the burstiness and volume of traffic in both links and assigns the number of slots for each link according to a ratio of weighted volumes in each link, where the weights are driven by the estimated burstiness in each link. For the estimation of burstiness, we propose a new burstiness measure that is based on a ratio between peak and median volume of traffic. This burstiness measure requires the determination of an observation window, with which the median and the peak are measured. We propose a dynamic method for the selection of the observation window, making use of statistical characteristics of traffic: Autocorrelation Function (ACF) and Partial ACF (PACF). For the third method, we develop several estimators to estimate the parameters of a traffic distribution and suggest two new slot allocation methods based on the estimated parameters. The last method exploits physical layer information as another way of allocating slots to enhance the performance of the system.
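A toy illustration of a peak-to-median burstiness measure and a burstiness-ratio slot split. The 15-slot frame reflects WCDMA/TDD; the window length, traffic volumes, and the clamping rule are illustrative choices, not the dissertation's exact procedure:

```python
import numpy as np

def burstiness(volumes):
    """Peak-to-median traffic burstiness over an observation window."""
    volumes = np.asarray(volumes, dtype=float)
    return volumes.max() / np.median(volumes)

def split_slots(up, down, total_slots=15, min_slots=1):
    """Share TDD slots between links in proportion to burstiness estimates."""
    b_up, b_down = burstiness(up), burstiness(down)
    n_up = round(total_slots * b_up / (b_up + b_down))
    n_up = min(max(n_up, min_slots), total_slots - min_slots)
    return n_up, total_slots - n_up

# Illustrative per-frame traffic volumes (packets) in one observation window
uplink = [2, 3, 2, 4, 30, 3]        # bursty link
downlink = [10, 11, 9, 12, 10, 11]  # smooth link

print(split_slots(uplink, downlink))
```

The bursty link receives the larger share of slots; a volume-weighted variant (the second strategy above) would scale each link's share by its traffic volume as well.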
The performance of our proposed strategies is evaluated in various scenarios. The major simulations are categorized as: simulation on data traffic, simulation on combined voice and data traffic, and simulation on real trace data.
The performance of each strategy is evaluated in terms of throughput and packet drop ratio. In addition, we consider the frequency of slot changes to assess the performance in terms of control overhead.
We expect that this research work will add to the state of the knowledge in the field of link-layer protocol research for WCDMA/TDD systems. / Ph. D.
|
73 |
Simulation and Mathematical Tools for Performance Analysis of Low-Complexity Receivers. Deora, Gautam Krishnakumar. 19 February 2003 (has links)
In recent years, research on the design and performance evaluation of suboptimal receiver implementations has received considerable attention, owing to the complexity of realizing optimal receiver algorithms over wireless channels. This thesis addresses the effects of using reduced complexity receivers for the Satellite Digital Audio Radio (SDAR), Code Division Multiple Access (CDMA) and Ultra-Wideband (UWB) communications technologies.
A graphical-user-interface simulation tool has been developed to predict the link reliability performance of SDAR services in the continental United States. A feasibility study of receiving both satellite and terrestrial repeater signals using a selection diversity (single antenna) receiver has also been performed.
The thesis also develops a general mathematical framework for studying the efficacy of a sub-optimal generalized selection combining (GSC) diversity receiver over generalized fading channel models. The GSC receiver adaptively combines the subset of M diversity paths with the highest instantaneous signal-to-noise ratios (SNR) out of the total L available diversity paths. The analytical framework is applicable to Rake receiver designs in CDMA and UWB communications. / Master of Science
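Since MRC makes the combined SNR the sum of the branch SNRs, the GSC(M, L) output SNR reduces to the sum of the M largest instantaneous branch SNRs; a minimal sketch with made-up branch values:

```python
import numpy as np

def gsc_output_snr(path_snrs, M):
    """Generalized Selection Combining: MRC over the M strongest of L paths.

    With MRC, the combined SNR is the sum of branch SNRs, so GSC(M, L)
    sums the M largest instantaneous branch SNRs.
    """
    snrs = np.sort(np.asarray(path_snrs, dtype=float))[::-1]
    return snrs[:M].sum()

# Illustrative instantaneous branch SNRs (linear scale) for L = 5 paths
branch_snrs = [0.2, 1.5, 0.7, 3.0, 0.1]

print(gsc_output_snr(branch_snrs, M=2))   # two strongest branches only
print(gsc_output_snr(branch_snrs, M=5))   # M = L recovers full MRC
```

M = 1 is pure selection diversity and M = L is full MRC, which is what makes GSC a tunable complexity-performance trade-off for Rake designs.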
|
74 |
Signalakquisition in DS-Spreizspektrum-Systemen und ihre Anwendung auf den 3GPP-FDD-Mobilfunkstandard. Zoch, André. 03 November 2004 (has links) (PDF)
Robust signal acquisition is an important task in DS-SS receivers. The objective of the acquisition is to coarsely estimate the signal parameters such that the succeeding parameter tracking algorithms can be initialized. In particular, acquisition is needed to coarsely synchronize the receiver to the timing and frequency of the received signal. For this purpose mainly data aided and feedforward algorithms are applied. Using the maximum likelihood (ML) criterion, an estimator for the joint estimation of receive timing and frequency offset can be derived which determines the maximum of the Likelihood function over the whole parameter uncertainty region. Due to its high complexity the ML synchronizer is difficult to implement for practical applications. Hence, complexity reduced algorithms need to be derived. This thesis gives a systematic survey of acquisition algorithms and of performance analysis methods for analyzing such algorithms under mobile radio propagation conditions. The exploitation of multiple observations is investigated in order to improve the acquisition performance in terms of false alarm rate and acquisition time. In particular, optimal and suboptimal combining schemes for a fixed observation interval as well as sequential utilization of successive observations resulting in a variable observation length are analyzed. Another possibility to make the signal acquisition more efficient in terms of the acquisition time is to use multi stage acquisition algorithms. One class of those algorithms are the well known multiple dwell algorithms. A different approach is to design acquisition procedures in which the information about the unknown parameters is distributed among several stages such that each stage has to cope with a smaller uncertainty region in comparison to the overall parameter uncertainty. 
Analysis of multi stage algorithms followed by an extensive discussion of the 3GPP FDD downlink acquisition procedure as an example of a multi stage procedure with distributed information concludes the work. / Die zuverlässige Signalakquisition, die auch als Grobsynchronisation bezeichnet wird, stellt eine wichtige Aufgabe in DS-SS-Systemen dar. Das Ziel hierbei ist es, Schätzwerte für die Übertragungsparameter derart zu bestimmen, dass die der Grobsynchronisation nachfolgende Feinsynchronisation initialisiert werden kann, d. h. dass die bestimmten Schätzwerte innerhalb des Fangbereiches der Feinsynchronisationsalgorithmen liegen. Insbesondere ist es für die Bestimmung von Synchronisationszeitpunkt und Frequenzversatz sinnvoll, eine Grobsynchronisation durchzuführen. Im Interesse einer begrenzten Komplexität sowie einer möglichst schnellen Akquisition finden vor allem datengestützte und vorwärtsverarbeitende Algorithmen Anwendung. Ausgehend vom Maximum-Likelihood-Kriterium (ML-Kriterium) können geeignete Schätzer für die gemeinsame Bestimmung von Synchronisationszeitpunkt und Frequenzversatz abgeleitet werden. Dabei ist das Maximum der Likelihood-Funktion innerhalb der Parameterunsicherheitsregion zu bestimmen. Aufgrund seiner hohen Komplexität ist der ML-Schätzer für die Akquisition wenig geeignet; vielmehr müssen aufwandsgünstige Algorithmen mit ausreichender Leistungsfähigkeit gefunden werden. In dieser Arbeit werden verschiedene Algorithmen zur Parameterakquisition systematisierend gegenübergestellt. Weiterführend sind Verfahren zur Verbesserung des Akquisitionsverhaltens bezüglich Fehlalarm-Wahrscheinlichkeit und Akquisitionszeit unter Ausnutzung mehrfacher Beobachtung Gegenstand der Betrachtungen. Insbesondere optimale und suboptimale Verfahren mit fester Beobachtungsdauer sowie die sequentielle Auswertung aufeinander folgender Beobachtungen, bei der sich die Beobachtungsdauer nach der erreichten Entscheidungssicherheit bestimmt, werden analysiert.
Als eine weitere Möglichkeit, die Signalakquisition in Bezug auf die Akquisitionszeit effizienter zu gestalten, werden mehrstufige Akquisitionsverfahren diskutiert. Es werden zum einen die häufig genutzten Mehrfach-Dwell-Algorithmen sowie mehrstufige Algorithmen mit verteilter Information betrachtet. Bei letzteren Algorithmen wird jeder Akquisitionsstufe ein Teil der zur Synchronisation benötigten Information zugeordnet, wodurch sich die Parameter-Unsicherheit für jede einzelne Stufe verringert. Ziel hierbei ist es, durch Erhöhung der Entscheidungssicherheit der einzelnen Stufen die mittlere Akquisitionszeit zu reduzieren. Die Diskussion und die Analyse von mehrstufigen Akquisitionsverfahren bilden den Abschluss der Arbeit, wobei besonders auf die 3GPP-FDD Downlink-Akquisition als ein Beispiel für mehrstufige Verfahren mit verteilter Information eingegangen wird.
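The basic search that these acquisition schemes refine can be sketched as a serial search over the code-phase uncertainty region: evaluate a correlation metric at every timing hypothesis, pick the maximum, and compare it against a threshold. The sketch below is timing-only (no frequency offset), with an illustrative code, noise level, and threshold:

```python
import numpy as np

rng = np.random.default_rng(1)

# A short +/-1 spreading code, received with an unknown cyclic delay plus noise
N = 63
code = rng.choice([-1.0, 1.0], size=N)
true_phase = 17
rx = np.roll(code, true_phase) + 0.3 * rng.standard_normal(N)

# Serial search: normalized correlation against every code-phase hypothesis
corrs = np.array([abs(np.dot(rx, np.roll(code, k))) / N for k in range(N)])
best = int(np.argmax(corrs))

threshold = 0.5   # illustrative; in practice set from a target false-alarm rate
print(best == true_phase, corrs[best] > threshold)
```

The joint ML estimator adds a second search dimension (frequency offset), which is exactly why the uncertainty region, and hence the complexity, grows so quickly and motivates multi-stage schemes.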
|
75 |
Détection itérative des séquences pseudo-aléatoires / Iterative detection of pseudo-random sequences. Bouvier des Noes, Mathieu. 15 October 2015 (has links)
Les séquences binaires pseudo-aléatoires sont couramment employées par les systèmes de transmissions numériques ou des mécanismes de chiffrement. On les retrouve en particulier dans les transmissions par étalement de spectre par séquence directe (e.g. 3G ou GPS) ou pour construire des séquences d'apprentissage pour faciliter la synchronisation ou l'estimation du canal (e.g. LTE). Un point commun à toutes ces applications est la nécessité de se synchroniser avec la séquence émise. La méthode conventionnelle consiste à générer la même séquence au niveau du récepteur et à la corréler avec le signal reçu. Si le résultat dépasse un seuil pré-défini, la synchronisation est déclarée acquise. On parle alors de détection par corrélation. Cette thèse aborde une autre voie : la détection des séquences binaires pseudo-aléatoires par des techniques de décodage canal. Ceci permet par exemple de détecter des séquences longues (e.g. de période 2^42), contrairement aux techniques par corrélation qui sont trop complexes à implémenter. Cela nécessite néanmoins que le récepteur connaisse au préalable le polynôme générateur de la séquence. Nous avons montré que le décodage d'une séquence pseudo-aléatoire est une problématique du type 'détecte et décode'. Le récepteur détecte la présence de la séquence et simultanément estime son état initial. Ceci correspond dans la théorie classique de la détection à un détecteur de type GLRT qui ne connaît pas la séquence émise, mais qui connaît sa méthode de construction. L'algorithme implémente alors un GLRT qui utilise un décodeur pour estimer la séquence reçue. Ce dernier est implémenté avec un algorithme de décodage par passage de messages qui utilise une matrice de parité particulière. Elle est construite avec des équations de parité différentes, chacune ayant un poids de Hamming valant t, qui correspond au nombre de variables participant à l'équation. Les équations de parité sont un constituant indispensable du décodeur.
Nous avons donné leur nombre pour les m-séquences et les séquences de Gold. Pour le cas particulier des séquences de Gold, nous avons calculé le nombre d'équations de parité de poids t=5 lorsque le degré du polynôme générateur r est impair. Ce calcul est important car il n'y a pas d'équations de parité de poids t < 5 lorsque r est impair. Le nombre d'équations de parité est aussi utilisé pour estimer le degré minimal des équations d'un poids t donné. Nous avons montré que le modèle de prédiction estime correctement la valeur moyenne du degré minimal de l'ensemble des séquences de Gold. Nous avons néanmoins mis en évidence une grande variabilité du degré minimal des séquences autour de cette valeur moyenne. Nous avons ensuite identifié les ensembles absorbants complets de plus petite taille lorsque le décodeur emploie plusieurs polynômes de parité. Ces ensembles bloquent la convergence du décodeur lorsque celui-ci est alimenté avec du bruit. Ceci évite les fausses alarmes lors du processus de détection. Nous avons montré que des cycles 'transverses' détruisent ces ensembles absorbants, ce qui génère des fausses alarmes. Nous en avons déduit un algorithme qui minimise le nombre de cycles transverses de longueur 6 et 8, ce qui minimise la probabilité de fausse alarme lorsque le poids des équations de parité vaut t=3. Notre algorithme permet de sélectionner les équations de parité qui minimisent la probabilité de fausse alarme et ainsi réduire notablement le temps d'acquisition d'une séquence de Gold. Nous avons enfin proposé deux algorithmes de détection du code d'embrouillage pour les systèmes WCDMA et CDMA2000. Ils exploitent les propriétés des m-séquences constituant les séquences de Gold, ainsi que les mécanismes de décodage par passage de messages. Ces algorithmes montrent les vulnérabilités des transmissions par étalement de spectre. / Pseudo-random binary sequences are very common in wireless transmission systems and ciphering mechanisms.
More specifically, they are used in direct sequence spread spectrum transmission systems like UMTS or GPS, or to construct preamble sequences for synchronization and channel estimation purposes like in LTE. It is always required to synchronize the receiver with the transmitted sequence. The usual way consists in correlating the received signal with a replica of the sequence. If the correlation exceeds a predefined threshold, the synchronization is declared valid. This thesis addresses a different approach: the binary sequence is detected with a forward error correction decoding algorithm. This allows, for instance, very long sequences to be detected. In this thesis, we show that decoding a pseudo-random sequence is a problem of the 'detect and decode' kind. The decoder detects the presence of the transmitted sequence and simultaneously estimates its initial state. In conventional detection theory, this corresponds to a GLRT detector that uses a decoder to estimate the unknown parameter, which is the transmitted sequence. For pseudo-random sequences, the decoder implements an iterative message-passing algorithm. It uses a parity check matrix to define the decoding graph on which the algorithm operates. Each parity check equation has a weight t, corresponding to the number of variables in the equation. Parity check equations are thus an essential component of the decoder. The decoding procedure is known to be sensitive to the weight t of the parity check equations. For m-sequences, the number of parity check equations is already known: it is given by the number of codewords of weight t in the corresponding Hamming dual code. For Gold sequences, the number of parity check equations of weight t = 3 and 4 has already been evaluated by Kasami. In this thesis we provide an analytical expression for the number of parity check equations of weight t = 5 when the degree of the generator polynomial r is odd.
Knowing this number is important because there is no parity check equation of weight t < 5 when r is odd. This enumeration is also used to provide an estimate of the least degree of parity check equations of weight t. We have then addressed the problem of selecting the parity check equations used by the decoder. We observed that the probability of false alarm is very sensitive to this selection. This is explained by the presence or absence of absorbing sets, which block the convergence of the decoder when it is fed only with noise. These sets are known to be responsible for the error floor of LDPC codes. We give a method to identify these sets from the parity check equations used by the decoder. The probability of false alarm can increase dramatically if these absorbing sets are destroyed. We then propose an algorithm for selecting the parity check equations. It relies on minimizing the number of cycles of length 6 and 8. Simulations show that the algorithm significantly improves the probability of false alarm and the average acquisition time. Finally, we propose two algorithms for the detection of the scrambling codes used in the uplink of UMTS-FDD and CDMA2000 systems. They highlight a new vulnerability of DSSS transmission systems: it is now conceivable to detect these transmissions if the sequence's generator is known.
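As a concrete instance of the parity check idea, an m-sequence generated by the primitive polynomial x^5 + x^2 + 1 satisfies the weight-3 check s[n+5] = s[n+2] XOR s[n] at every position; relations of exactly this kind are what a message-passing detector exploits. A minimal sketch (the thesis targets much longer Gold and scrambling sequences):

```python
def m_sequence(length):
    """Bits satisfying s[n+5] = s[n+2] XOR s[n] (polynomial x^5 + x^2 + 1)."""
    s = [1, 0, 0, 0, 0]                  # any non-zero initial state
    while len(s) < length:
        s.append(s[-3] ^ s[-5])          # s[n+5] = s[n+2] ^ s[n]
    return s

seq = m_sequence(62)

# Every weight-3 parity check derived from the generator polynomial holds,
# and the sequence repeats with period 2^5 - 1 = 31.
print(all(seq[n + 5] == seq[n + 2] ^ seq[n] for n in range(len(seq) - 5)))
print(seq[:31] == seq[31:])
```

Because every shift of the sequence satisfies the same checks, the decoder can test the received signal against them without knowing the initial state, which is exactly the 'detect and decode' setting described above.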
|
76 |
Signalakquisition in DS-Spreizspektrum-Systemen und ihre Anwendung auf den 3GPP-FDD-Mobilfunkstandard. Zoch, André. 03 May 2004 (has links)
Robust signal acquisition is an important task in DS-SS receivers. The objective of the acquisition is to coarsely estimate the signal parameters such that the succeeding parameter tracking algorithms can be initialized. In particular, acquisition is needed to coarsely synchronize the receiver to the timing and frequency of the received signal. For this purpose mainly data aided and feedforward algorithms are applied. Using the maximum likelihood (ML) criterion, an estimator for the joint estimation of receive timing and frequency offset can be derived which determines the maximum of the Likelihood function over the whole parameter uncertainty region. Due to its high complexity the ML synchronizer is difficult to implement for practical applications. Hence, complexity reduced algorithms need to be derived. This thesis gives a systematic survey of acquisition algorithms and of performance analysis methods for analyzing such algorithms under mobile radio propagation conditions. The exploitation of multiple observations is investigated in order to improve the acquisition performance in terms of false alarm rate and acquisition time. In particular, optimal and suboptimal combining schemes for a fixed observation interval as well as sequential utilization of successive observations resulting in a variable observation length are analyzed. Another possibility to make the signal acquisition more efficient in terms of the acquisition time is to use multi stage acquisition algorithms. One class of those algorithms are the well known multiple dwell algorithms. A different approach is to design acquisition procedures in which the information about the unknown parameters is distributed among several stages such that each stage has to cope with a smaller uncertainty region in comparison to the overall parameter uncertainty. 
Analysis of multi stage algorithms followed by an extensive discussion of the 3GPP FDD downlink acquisition procedure as an example of a multi stage procedure with distributed information concludes the work. / Die zuverlässige Signalakquisition, die auch als Grobsynchronisation bezeichnet wird, stellt eine wichtige Aufgabe in DS-SS-Systemen dar. Das Ziel hierbei ist es, Schätzwerte für die Übertragungsparameter derart zu bestimmen, dass die der Grobsynchronisation nachfolgende Feinsynchronisation initialisiert werden kann, d. h. dass die bestimmten Schätzwerte innerhalb des Fangbereiches der Feinsynchronisationsalgorithmen liegen. Insbesondere ist es für die Bestimmung von Synchronisationszeitpunkt und Frequenzversatz sinnvoll, eine Grobsynchronisation durchzuführen. Im Interesse einer begrenzten Komplexität sowie einer möglichst schnellen Akquisition finden vor allem datengestützte und vorwärtsverarbeitende Algorithmen Anwendung. Ausgehend vom Maximum-Likelihood-Kriterium (ML-Kriterium) können geeignete Schätzer für die gemeinsame Bestimmung von Synchronisationszeitpunkt und Frequenzversatz abgeleitet werden. Dabei ist das Maximum der Likelihood-Funktion innerhalb der Parameterunsicherheitsregion zu bestimmen. Aufgrund seiner hohen Komplexität ist der ML-Schätzer für die Akquisition wenig geeignet; vielmehr müssen aufwandsgünstige Algorithmen mit ausreichender Leistungsfähigkeit gefunden werden. In dieser Arbeit werden verschiedene Algorithmen zur Parameterakquisition systematisierend gegenübergestellt. Weiterführend sind Verfahren zur Verbesserung des Akquisitionsverhaltens bezüglich Fehlalarm-Wahrscheinlichkeit und Akquisitionszeit unter Ausnutzung mehrfacher Beobachtung Gegenstand der Betrachtungen. Insbesondere optimale und suboptimale Verfahren mit fester Beobachtungsdauer sowie die sequentielle Auswertung aufeinander folgender Beobachtungen, bei der sich die Beobachtungsdauer nach der erreichten Entscheidungssicherheit bestimmt, werden analysiert.
Als eine weitere Möglichkeit, die Signalakquisition in Bezug auf die Akquisitionszeit effizienter zu gestalten, werden mehrstufige Akquisitionsverfahren diskutiert. Es werden zum einen die häufig genutzten Mehrfach-Dwell-Algorithmen sowie mehrstufige Algorithmen mit verteilter Information betrachtet. Bei letzteren Algorithmen wird jeder Akquisitionsstufe ein Teil der zur Synchronisation benötigten Information zugeordnet, wodurch sich die Parameter-Unsicherheit für jede einzelne Stufe verringert. Ziel hierbei ist es, durch Erhöhung der Entscheidungssicherheit der einzelnen Stufen die mittlere Akquisitionszeit zu reduzieren. Die Diskussion und die Analyse von mehrstufigen Akquisitionsverfahren bilden den Abschluss der Arbeit, wobei besonders auf die 3GPP-FDD Downlink-Akquisition als ein Beispiel für mehrstufige Verfahren mit verteilter Information eingegangen wird.
|
77 |
Etude du bloc de réception dans un terminal UMTS-FDD et développement d'une méthodologie de codesign en vue du fonctionnement en temps réel. Batut, Eric. 03 June 2002 (has links) (PDF)
UMTS is a new mobile radio communications standard intended to resolve the problems of today's second-generation networks, which are locally close to saturation and limited in their multimedia service offerings by the low payload data rates they support. UMTS represents a major technological break and requires a particular effort in equipment design, because the complexity of the required processing has grown considerably: 3G terminals, for example, will have to embed more than an order of magnitude more computing power than their predecessors. After introducing UMTS and one of its radio interfaces, Wideband CDMA, we identify the terminal's estimation of the mobile radio channel traversed by the signal transmitted by the base station as one of the tasks likely to require the largest number of operations. An original solution to this problem is proposed in the form of an iterative, path-cancelling channel estimation algorithm. The major drawback of this algorithm is that its computational complexity grows with the square of the oversampling factor, which precludes working with a high oversampling factor and consequently limits the accuracy of the estimated path arrival times. This problem is solved by introducing an optimized version of the algorithm whose complexity grows linearly with the oversampling factor. Keeping the complexity reasonable while working with high oversampling factors then becomes realistic, which gives access, at equal cost, to a higher accuracy than the original algorithm.
Moreover, this optimization simplifies the elementary operations performed by the algorithm, which makes its implementation on a hybrid hardware-software architecture more efficient than its implementation on a single digital signal processor. A system-level design methodology is then proposed to realize this hybrid architecture with rapid prototyping in mind. This methodology, built around the N2C tool from CoWare, uses a high-level language, a superset of C extended with the constructs needed to describe hardware architectures. The algorithm is partitioned into a software part running on an ST100 DSP core and a coprocessor realized in hardwired logic. Severe software incompatibilities prevented the realization of this hybrid architecture with the proposed methodology, but interesting results were nevertheless obtained from a purely software implementation of the proposed algorithm. The architecture obtained by applying the first steps of the proposed methodology to the channel estimation algorithm is described, along with some suggestions made to CoWare, Inc. for the improvement of their tool. Finally, the suitability of the proposed methodology for a rapid prototyping environment is discussed and directions for a possible demonstrator are given.
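The iterative path-cancellation idea described above can be sketched as a greedy loop: correlate the residual with the pilot, take the strongest peak as a path, subtract its estimated contribution, and repeat. This simplified version assumes integer-sample delays and no oversampling, so it deliberately omits the oversampling complexity issue the optimized algorithm addresses:

```python
import numpy as np

def estimate_paths(rx, pilot, n_paths):
    """Iterative channel estimation with successive path cancellation."""
    residual = rx.astype(complex)
    energy = float(np.dot(pilot, pilot))
    paths = []
    for _ in range(n_paths):
        # Correlate the residual with the pilot at every integer delay
        corr = np.array([np.dot(np.roll(pilot, d), residual)
                         for d in range(len(pilot))])
        d = int(np.argmax(np.abs(corr)))
        g = corr[d] / energy                        # least-squares gain at delay d
        residual = residual - g * np.roll(pilot, d)  # cancel this path
        paths.append((d, g))
    return paths

# Illustrative noiseless two-path channel over a random +/-1 pilot
rng = np.random.default_rng(2)
pilot = rng.choice([-1.0, 1.0], size=64)
rx = 1.0 * np.roll(pilot, 5) + 0.4 * np.roll(pilot, 11)

for delay, gain in estimate_paths(rx, pilot, n_paths=2):
    print(delay, round(abs(gain), 2))
```

Cancelling the strongest path before searching for the next one is what lets the weaker path be located even when its correlation peak is masked by the stronger one's sidelobes.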
|
78 |
Towards efficient legacy test evaluations at Ericsson AB, Linköping. Sterneberg, Karl Gustav. January 2008 (has links)
<p>3Gsim is a load generator for traffic simulation in a WCDMA (Wideband Code Division Multiple Access) network, developed at Ericsson AB in Linköping. Tests are run daily and the results are evaluated by testers. When errors or abnormalities are found, the testers write trouble reports, and the described problems are handed over to designers whose task is to fix them. In order to save time, Ericsson wished to make this process more efficient.

This study has focused on a specific part of the 3Gsim development process, namely the evaluation of test results. The goal has been to investigate if and how the process of evaluating 3Gsim test results can be made more efficient. The daily work of the testers was studied at close hand by the author. The testers answered a questionnaire about their work and their opinions of the tools in use. The answers were evaluated, and focus was placed on the main problems.

It was found that a lot of time is wasted searching for trouble reports. A large part of the test result evaluation process consists of going through summary logs with error print-outs. Unfortunately, no mapping between error print-outs and trouble reports is performed, so when going through the summary logs the testers have to determine which errors have already been reported and which have not. Another major problem is that most tests fail. On the webpage where the test results are displayed, this is indicated by a coloured field showing red, which is believed to have a negative effect on the work attitude.

A lot of time can be saved by mapping error print-outs to trouble reports and automatically comparing new error print-outs with old ones. The mapping will also help prevent the creation of duplicate trouble reports. This solution will have the greatest impact on efficiency. Another way to improve efficiency is to develop a more advanced colour-coding scheme than the one used today. This coding scheme will help the testers set the right priorities when processing the test results. Furthermore, these two solutions will have a positive effect on the work attitude. A prototype implementing the first solution has been created, giving Ericsson AB the possibility to test the idea in practice.</p>
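The mapping the abstract proposes could be sketched roughly as below: error print-outs are normalized so that variable details (timestamps, counters) do not prevent matching, and normalized forms are keyed to trouble report ids. All names and the log format are hypothetical, not taken from the thesis or from Ericsson's tools.

```python
import re

def normalize(printout: str) -> str:
    """Normalize an error print-out so that variable details
    (timestamps, addresses, numeric ids) do not prevent matching."""
    s = printout.strip().lower()
    s = re.sub(r"0x[0-9a-f]+", "<addr>", s)   # hex addresses
    s = re.sub(r"\d+", "<num>", s)            # counters, ids, timestamps
    return s

class TroubleReportIndex:
    """Maps normalized error print-outs to trouble report ids."""
    def __init__(self):
        self._index = {}

    def record(self, printout: str, report_id: str) -> None:
        self._index[normalize(printout)] = report_id

    def lookup(self, printout: str):
        """Return the known report id, or None if the error is new."""
        return self._index.get(normalize(printout))

idx = TroubleReportIndex()
idx.record("2008-04-01 12:00:01 ERROR rnc-7: cell 42 setup failed", "TR-1234")

# A later print-out differing only in timestamp and cell id matches:
print(idx.lookup("2008-04-02 09:13:55 ERROR rnc-7: cell 17 setup failed"))
```

A lookup that returns an id tells the tester the error is already reported; a miss signals a candidate for a new trouble report, which also prevents duplicates.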
|
79 |
Direct Conversion RF Front-End Implementation for Ultra-Wideband (UWB) and GSM/WCDMA Dual-Band Applications in Silicon-Based Technologies. Park, Yunseo. 28 November 2005.
This dissertation focuses on wideband circuit design and implementation issues up to 10 GHz based on the direct conversion architecture in CMOS and SiGe BiCMOS technologies. The dissertation consists of two parts: one, implementation of an RF front-end receiver for an ultra-wideband system and, two, implementation of local oscillator (LO) signal generation for a GSM/WCDMA multiband application. For emerging ultra-wideband (UWB) applications, the key active components in the RF front-end receiver were designed and implemented in a 0.18 um SiGe BiCMOS process. The design of the LNA, the critical circuit block for both systems, was analyzed in terms of noise, linearity and group delay variation over an extremely wide bandwidth. Measurements are demonstrated for an energy-thrifty UWB receiver based on an MB-OFDM system covering the full FCC-allowed UWB frequency range.
For multiband applications such as GSM/WCDMA dual-band operation, the design of a wideband VCO and various frequency generation blocks is investigated as an alternative for implementing the direct conversion architecture. In order to reduce the DC-offset and LO pulling phenomena that degrade performance in a typical direct conversion scheme, an innovative fractional LO signal generator was implemented in a standard CMOS process. A simple analysis is provided for the loop dynamics and operating range of the design, as well as for the measured results of the fractional LO signal generator.
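The motivation for a fractional LO generator can be illustrated with one common scheme (a generic textbook arrangement, not necessarily the exact circuit of this dissertation): the VCO runs at a fraction of the carrier frequency, its output is divided, and the LO is recovered by single-sideband mixing, so the VCO never oscillates at the carrier and is far less susceptible to pulling by the PA. The 2/3 ratio and the 1.95 GHz WCDMA uplink carrier below are illustrative assumptions.

```python
def fractional_lo(f_rf_hz: float, n: int = 2, m: int = 3):
    """Frequency plan of a generic fractional-LO scheme.

    The VCO runs at (n/m) of the carrier, its divide-by-2 output is
    mixed back in, and the sum sideband lands on the carrier frequency.
    """
    f_vco = f_rf_hz * n / m   # VCO at 2/3 of the carrier: away from f_rf
    f_div = f_vco / 2         # divide-by-2 output: 1/3 of the carrier
    f_lo = f_vco + f_div      # upper sideband of the mix: back at f_rf
    return f_vco, f_div, f_lo

# Hypothetical WCDMA uplink carrier near 1.95 GHz:
f_vco, f_div, f_lo = fractional_lo(1.95e9)
print(f_vco / 1e9, f_div / 1e9, f_lo / 1e9)  # VCO at 1.3 GHz, LO at 1.95 GHz
```

Because the VCO sits at 1.3 GHz rather than 1.95 GHz, a strong transmitted or leaked carrier does not injection-lock it, which is the pulling mechanism the abstract refers to.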
|
80 |
Prediction of Time Series of HSPA - WCDMA Network Parameters (Predição de séries temporais de parâmetros de redes HSPA - WCDMA). Bezerra, Tiago dos Santos. 10 June 2014.
Submitted by Automação e Estatística (sst@bczm.ufrn.br) on 2016-03-02T22:51:53Z
No. of bitstreams: 1
TiagoDosSantosBezerra_DISSERT.pdf: 2629795 bytes, checksum: 9ffd5ffe3a05204b53801a61891c5393 (MD5) / Approved for entry into archive by Arlan Eloi Leite Silva (eloihistoriador@yahoo.com.br) on 2016-03-04T00:16:01Z (GMT) No. of bitstreams: 1
TiagoDosSantosBezerra_DISSERT.pdf: 2629795 bytes, checksum: 9ffd5ffe3a05204b53801a61891c5393 (MD5) / Made available in DSpace on 2016-03-04T00:16:01Z (GMT). No. of bitstreams: 1
TiagoDosSantosBezerra_DISSERT.pdf: 2629795 bytes, checksum: 9ffd5ffe3a05204b53801a61891c5393 (MD5)
Previous issue date: 2014-06-10 / With the growing demand for data traffic in third-generation (3G) networks, mobile operators have focused infrastructure resources on the places where the greatest need is identified. These targeted investments aim to maintain the quality of service, especially in dense urban areas. In this work, time series prediction is performed on a WCDMA - HSPA network for the parameters Rx Power, RSCP (Received Signal Code Power), Ec/Io (Energy per chip/Interference) and physical-layer transmission rate (throughput). The parameter values were collected on a fully operational network through a drive test in Natal - RN, a capital in northeastern Brazil. The models used for the time series predictions were Simple Exponential Smoothing, Holt, Additive Holt-Winters and Multiplicative Holt-Winters. The objective of the predictions is to determine which model generates the best predictions of the WCDMA - HSPA network parameters.
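Two of the models named in the abstract can be sketched in a few lines: Simple Exponential Smoothing tracks only a level, while Holt's method adds a trend term. The sketch below is a minimal textbook implementation, and the RSCP samples and smoothing constants are hypothetical, not data from the thesis.

```python
def ses(series, alpha):
    """Simple Exponential Smoothing: level-only one-step-ahead forecast."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level  # forecast for the next time step

def holt(series, alpha, beta):
    """Holt's linear method: level plus a smoothed trend term."""
    level, trend = series[0], series[1] - series[0]
    for x in series[1:]:
        prev = level
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - prev) + (1 - beta) * trend
    return level + trend  # one-step-ahead forecast

# Hypothetical RSCP samples (dBm) improving steadily along a drive route:
rscp = [-85.0, -84.5, -84.0, -83.5, -83.0]
print(round(ses(rscp, alpha=0.5), 2))            # → -83.47 (lags the trend)
print(round(holt(rscp, alpha=0.5, beta=0.5), 2)) # → -82.5 (extrapolates it)
```

On this linear series Holt extrapolates the 0.5 dB/step trend exactly, while SES lags behind; the Holt-Winters variants extend Holt with an additive or multiplicative seasonal component, which the sketch omits.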
|