71 |
Rate Flexible Soft Decision Viterbi Decoder using SiLago. Baliga, Naveen Bantwal, January 2021
The IEEE 802.11a protocol is part of the IEEE 802 family of protocols for implementing WLAN Wi-Fi computer communications in various frequency bands. These protocols are used worldwide, covering a wide range of devices such as mobile phones, computers, laptops and household appliances. Since wireless communication is used, the transmitted data is susceptible to noise. As a means of recovering from noise, the transmitted data is encoded using convolutional encoding and correspondingly decoded on the receiver side. The decoder used, in the PHY layer of the protocol, is the Viterbi decoder. This thesis investigates soft-decision Viterbi decoder implementations that meet the requirements of the IEEE 802.11a protocol. It aims to implement a rate-flexible design as a coarse-grained reconfigurable architecture using the SiLago framework. SiLago is a modular approach to ASIC design. Components are designed as hardened blocks, which means they are synthesised and pre-verified. Each block is also abuttable, like LEGO blocks, which allows users to connect compatible blocks and build designs specific to their requirements while getting performance similar to that of traditional ASICs. This approach significantly reduces design costs, as verification is a one-time task. The thesis discusses the strongly connected trellis Viterbi decoding algorithm and proposes a design for a soft-decision Viterbi decoder. The proposed design meets the throughput requirements of the communication protocol and can be reconfigured to work for 45 different code rates, with programmable soft-decision width and parallelism. The BER performance of the algorithm is compared against MATLAB. Results from RTL simulations and the advantages and disadvantages of the proposed design are discussed. Recommendations for future improvements are also made.
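To make the decoding step concrete, below is a minimal software sketch of soft-decision Viterbi decoding for the standard rate-1/2, constraint-length-7 convolutional code (generators 133 and 171 octal) used by IEEE 802.11a. It is an illustrative reference model only: the bit-ordering convention, correlation branch metric, absence of trellis termination and traceback from the best end state are simplifying assumptions, and it does not represent the rate-flexible SiLago hardware design described in the abstract.

```python
import numpy as np

G = (0o133, 0o171)        # generator polynomials of the 802.11a mother code
K = 7                     # constraint length
N_STATES = 1 << (K - 1)

def conv_encode(bits):
    """Rate-1/2 convolutional encoder; returns two coded bits per input bit."""
    state, out = 0, []
    for b in bits:
        reg = (b << (K - 1)) | state
        out += [bin(reg & g).count("1") & 1 for g in G]
        state = reg >> 1
    return np.array(out)

def viterbi_soft(soft):
    """Soft-decision Viterbi decoding; `soft` holds two noisy antipodal samples per bit."""
    n = len(soft) // 2
    # Precompute the expected +/-1 outputs and next state for every (state, input) branch.
    exp_sym = np.zeros((N_STATES, 2, 2))
    nxt = np.zeros((N_STATES, 2), dtype=int)
    for s in range(N_STATES):
        for b in (0, 1):
            reg = (b << (K - 1)) | s
            exp_sym[s, b] = [1 - 2 * (bin(reg & g).count("1") & 1) for g in G]
            nxt[s, b] = reg >> 1
    path_metric = np.full(N_STATES, -np.inf)
    path_metric[0] = 0.0                        # encoder starts in the all-zero state
    survivors = np.zeros((n, N_STATES), dtype=int)
    for t in range(n):
        r = soft[2 * t:2 * t + 2]
        new_metric = np.full(N_STATES, -np.inf)
        for s in range(N_STATES):
            if path_metric[s] == -np.inf:
                continue
            for b in (0, 1):
                m = path_metric[s] + exp_sym[s, b] @ r      # correlation branch metric
                if m > new_metric[nxt[s, b]]:
                    new_metric[nxt[s, b]] = m
                    survivors[t, nxt[s, b]] = (s << 1) | b  # pack previous state and bit
        path_metric = new_metric
    # Trace back from the best end state (no tail bits, so the last few bits are weaker).
    state, bits = int(np.argmax(path_metric)), []
    for t in range(n - 1, -1, -1):
        packed = survivors[t, state]
        bits.append(packed & 1)
        state = packed >> 1
    return np.array(bits[::-1])

msg = np.random.randint(0, 2, 200)
coded = 1.0 - 2.0 * conv_encode(msg)                  # BPSK mapping: 0 -> +1, 1 -> -1
noisy = coded + 0.5 * np.random.randn(coded.size)     # soft (unquantized) observations
print("bit errors:", int(np.count_nonzero(viterbi_soft(noisy) != msg)))
```

A hardware realisation would pipeline the same add-compare-select recursion with fixed-point metrics; the rate flexibility and parallelism of the proposed design are configuration choices layered on top of this basic recursion.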
|
72 |
On Symbol Timing Recovery in All-Digital Receivers. Ghrayeb, Ali A., October 1997
International Telemetering Conference Proceedings / October 27-30, 1997 / Riviera Hotel and Convention Center, Las Vegas, Nevada / Sandia National Laboratories (SNL) currently achieves a bandwidth efficiency (η) of 0.5 to 1.0 bps/Hz by using traditional modulation schemes such as BPSK and QFSK. SNL has an interest in increasing the present bandwidth efficiency by a factor of 4 or higher within the same allocated bandwidth (about 10 MHz). Simulations have shown that 32-QAM trellis-coded modulation (TCM) gives good bit error rate (BER) performance and meets the bandwidth-efficiency requirement. Critical to achieving this is that the receiver be able to attain timing synchronization. This paper examines a particular timing recovery algorithm for all-digital receivers. Timing synchronization in a digital receiver can be achieved in different ways. One way is to interpolate the original sampled sequence to produce another sampled sequence synchronized to the symbol rate or a multiple of the symbol rate. An adaptive sampling conversion algorithm which performs this function was developed by Floyd Gardner in 1993. In the present work, his algorithm was applied to two different modulation schemes, BPSK and 4-ary PAM. The two schemes were simulated in the presence of AWGN and ISI along with Gardner's algorithm for timing recovery and a fractionally spaced equalizer (T/2 FSE) for equalization. Simulations show that the algorithm gives good BER performance for BPSK in all the situations considered, and at different sampling frequencies, but unfortunately poor performance for the 4-ary PAM scheme. This indicates that Gardner's algorithm for sampling conversion is not suitable for multi-level signaling schemes.
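As an illustration of the decision-independent timing error detector that drives this kind of interpolation-based recovery, the sketch below evaluates Gardner's two-samples-per-symbol error signal on an oversampled BPSK waveform. The linear-transition pulse shape, oversampling factor and offset sweep are illustrative assumptions; the paper's full receiver additionally includes Gardner's 1993 interpolation control and a T/2 fractionally spaced equalizer, which are not modelled here.

```python
import numpy as np

def gardner_ted(on_time, mid):
    """e[k] = (y[k] - y[k-1]) * y[k - 1/2]; needs 2 samples/symbol and no data decisions."""
    return (on_time[1:] - on_time[:-1]) * mid[1:]

# Build an oversampled BPSK waveform with smooth (linear) symbol transitions.
sps = 8                                   # samples per symbol in the "analog" waveform
bits = np.random.randint(0, 2, 2000)
symbols = 1.0 - 2.0 * bits
fine = np.interp(np.arange(len(symbols) * sps) / sps,
                 np.arange(len(symbols)), symbols)

# S-curve: average detector output versus sampling-phase offset (in fine-grid samples).
for offset in range(-3, 4):
    idx = np.arange(1, len(symbols) - 1) * sps + offset
    on_time = fine[idx]                   # samples meant to land on symbol centres
    mid = fine[idx - sps // 2]            # samples meant to land mid-way between them
    print(f"offset {offset:+d} samples -> mean TED {gardner_ted(on_time, mid).mean():+.3f}")
# The mean detector output is zero at zero offset and changes sign with the offset,
# which is what lets a feedback loop steer an interpolator toward symbol-rate sampling.
```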
|
73 |
Energetic-Lattice Based Optimization. Kiran, Bangalore Ravi, 31 October 2014
Hierarchical segmentation produces a family of partitions that represent the same image at progressively coarser levels of detail; it also serves as the input to the search for an optimal partition that combines regions from different levels in different places. Hierarchical image processing is an emerging topic in computer vision, and in particular in the hyperspectral imaging and GIS communities, on account of its ability to structure high-dimensional data. Chapter 1 discusses two fundamental concepts, braids and energetic lattices. A braid of partitions is a richer model than a hierarchy of partitions in that it additionally contains partitions that are not nested in one another, while still being globally supported by a hierarchy. The energetic lattice is a mixed structure that couples a braid with an energy and allows maximal and minimal elements to be defined. Finding the partition composed of classes of the braid (or hierarchy) that minimizes a given energy is intractable by exhaustive search because of the combinatorial nature of the problem. Two conditions on the energy defined on partitions, h-increasingness and scale-increasingness, guarantee the existence, uniqueness and monotone ordering of the minimal partitions, and lead to an algorithm that computes them in two passes over the data. These conditions are also coherent with the braid structure, which allows constrained optimization over hierarchies and, more generally, braids. Chapter 2 studies constrained optimization in this framework, showing how the energetic lattice generalizes the Lagrangian formulation of the constrained optimization problem on hierarchies and leading to three generalizations of the Lagrangian model. Chapter 3 applies energetic-lattice optimization to the case where the energy is induced by a ground truth, that is, a set of manual delineations that the optimal partitions must fit as closely as possible. Chapter 4 moves from energetic lattices on hierarchies and braids to a lattice of Jordan curves in the Euclidean plane, which defines a continuous model of hierarchical segmentation and, among other things, makes it possible to compose hierarchies with various numerical functions.
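To illustrate the kind of optimization the thesis studies, the sketch below runs the bottom-up dynamic program that extracts an energy-minimal cut from a hierarchy of regions. The Mumford-Shah-like energy (squared error to the region mean plus a constant per-region penalty) and the toy hierarchy are illustrative assumptions; the h-increasingness condition discussed above is what guarantees that this kind of pass over the hierarchy returns the global minimum over all cuts.

```python
class Node:
    def __init__(self, pixels, children=()):
        self.pixels = pixels          # sample values covered by the region
        self.children = list(children)

def region_energy(node, lam=0.5):
    """Fidelity (squared error to the region mean) plus a scale penalty lam per region."""
    mean = sum(node.pixels) / len(node.pixels)
    return sum((v - mean) ** 2 for v in node.pixels) + lam

def optimal_cut(node, lam=0.5):
    """Return (best energy, list of regions) for the sub-hierarchy rooted at node."""
    keep = (region_energy(node, lam), [node])          # option 1: keep node as one region
    if not node.children:
        return keep
    child_cuts = [optimal_cut(c, lam) for c in node.children]
    split = (sum(e for e, _ in child_cuts),            # option 2: refine into children's cuts
             [r for _, regs in child_cuts for r in regs])
    return min(keep, split, key=lambda t: t[0])

# Toy 1-D "image": two flat zones with mild noise, organised as a small hierarchy.
leaves = [Node([1.0, 1.1]), Node([0.9]), Node([5.0, 5.2]), Node([4.9])]
hierarchy = Node([v for l in leaves for v in l.pixels],
                 children=[Node(leaves[0].pixels + leaves[1].pixels, leaves[:2]),
                           Node(leaves[2].pixels + leaves[3].pixels, leaves[2:])])
energy, cut = optimal_cut(hierarchy)
print("minimal energy:", round(energy, 3), "| regions:", [n.pixels for n in cut])
```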
|
74 |
Design and Implementation of a New Algorithm and Packet Forwarding Protocol Based on Convolutional Codes Using TCNet: Trellis Coded Network. Lima Filho, Diogo Ferreira, 24 February 2015
Wireless sensor networks (WSNs) have evolved from the idea that small wireless sensors can be used to collect information from the physical environment in a large number of situations. Early work on WSNs was carried out by the Defense Advanced Research Projects Agency (DARPA) under the Smart Dust concept, based on microelectromechanical systems (MEMS): devices able to detect light, temperature, vibration, magnetism or chemicals, with embedded processing and the capability to transmit data wirelessly. Emerging technologies have since taken advantage of connectivity to the World Wide Web to broaden the range of applications of this technology, among them the Internet of Things (IoT). This research studies the implementation of a new algorithm and protocol for forwarding the data collected by microsensors in ad hoc network scenarios, with sensors randomly distributed over an adverse area. Although several hardware devices have been developed by the WSN research community, there is an effort led by the Internet Engineering Task Force (IETF) to implement and standardize protocols suited to these devices, which have limited energy and processing resources. This work proposes the implementation of new packet forwarding algorithms using the concept of convolutional codes. Results obtained through extensive simulations show gains in latency and energy consumption relative to the AODV protocol. The implementation complexity is extremely low and compatible with the few hardware resources usually available in the elements of a wireless sensor network (WSN). The future work section indicates a large set of applications to which the developed concepts can be applied.
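As background for the idea sketched above, the snippet below builds the state-transition (trellis) table of a small rate-1/2 convolutional encoder, the finite-state machine on which trellis-coded approaches rest, and traces the deterministic path produced by an input sequence. The code parameters are illustrative, and how TCNet actually maps trellis states and transitions onto network nodes and forwarding decisions is defined in the thesis, not here; the point is only that the whole state machine fits in a table small enough for a resource-limited sensor node.

```python
K = 3                                # constraint length of a toy rate-1/2 code
GENERATORS = (0b111, 0b101)          # generator polynomials (7, 5 in octal)
N_STATES = 1 << (K - 1)

def step(state, bit):
    """One trellis transition: returns (next_state, (coded_bit_0, coded_bit_1))."""
    reg = (bit << (K - 1)) | state
    out = tuple(bin(reg & g).count("1") & 1 for g in GENERATORS)
    return reg >> 1, out

# The full trellis fits in a 4-state x 2-input table: a few bytes of ROM per node.
trellis = {(s, b): step(s, b) for s in range(N_STATES) for b in (0, 1)}
for (s, b), (ns, out) in sorted(trellis.items()):
    print(f"state {s:02b} --in {b}/out {out[0]}{out[1]}--> state {ns:02b}")

# A packet's label sequence then traces one deterministic path through the trellis.
state, path = 0, []
for bit in [1, 0, 1, 1]:
    state, out = trellis[(state, bit)]
    path.append((state, out))
print("path:", path)
```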
|
76 |
Channel Phase and Data Estimation in Slowly Fading Frequency Nonselective Channels. Zeydan, Engin, 01 August 2006
In coherent receivers, the effect of the multipath fading channel on the transmitted signal must be estimated to recover the transmitted data. In this thesis, the channel phase and data estimation problems are investigated for a transmitted data sequence when the channel is modeled as a slowly fading, frequency non-selective channel. Channel phase estimation over a transmitted data sequence is investigated, and data estimation is performed in a symbol-by-symbol MAP receiver designed for the minimum symbol error probability criterion. The channel phase is quantized over an interval of interest, a trellis diagram is constructed, and the Viterbi decoding algorithm, using phase transition and observation models, is applied for channel phase estimation. The optimum coherent and noncoherent detectors for binary orthogonal and PSK signals are derived, and the modulated signals in a sequence are detected in symbol-by-symbol MAP receivers. Simulation results show that the performance of the receiver with phase estimation lies between that of the optimum coherent and noncoherent receivers.
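A minimal sketch of the quantized-phase trellis idea follows: the unknown, slowly varying channel phase is discretized into a finite set of states, a transition cost penalizes large phase jumps between consecutive symbols, and a Viterbi search recovers the most likely phase trajectory from the observations. For simplicity the data symbols are treated as known (pilot-like); the thesis instead detects the data jointly in a symbol-by-symbol MAP receiver, and the quantization range, noise level and random-walk phase model below are illustrative assumptions.

```python
import numpy as np

def viterbi_phase_track(r, symbols, phases, sigma=0.3, jump_penalty=4.0):
    """Track the phase path that best explains r[k] ~ symbols[k]*exp(j*phi[k]) + noise."""
    n, m = len(r), len(phases)
    cost = np.zeros(m)
    back = np.zeros((n, m), dtype=int)
    for k in range(n):
        # Observation cost: distance of r[k] to the symbol rotated by each candidate phase.
        obs = np.abs(r[k] - symbols[k] * np.exp(1j * phases)) ** 2 / (2 * sigma ** 2)
        # Transition cost: penalise phase changes between consecutive symbols.
        trans = cost[:, None] + jump_penalty * (phases[:, None] - phases[None, :]) ** 2
        back[k] = np.argmin(trans, axis=0)
        cost = trans[back[k], np.arange(m)] + obs
    # Trace back the best phase trajectory.
    path = [int(np.argmin(cost))]
    for k in range(n - 1, 0, -1):
        path.append(back[k][path[-1]])
    return phases[np.array(path[::-1])]

rng = np.random.default_rng(0)
n = 200
true_phase = np.cumsum(0.02 * rng.standard_normal(n))        # slow random-walk phase
syms = 1.0 - 2.0 * rng.integers(0, 2, n)                     # known (pilot) BPSK symbols
noise = 0.3 * (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
r = syms * np.exp(1j * true_phase) + noise
est = viterbi_phase_track(r, syms, np.linspace(-np.pi / 2, np.pi / 2, 33))
print("RMS phase error (rad):", float(np.sqrt(np.mean((est - true_phase) ** 2))))
```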
|
77 |
Chaotic Demodulation Under Interference. Erdem, Ozden, 01 September 2006
Chaotically modulated signals are used in various engineering areas such as communication systems, signal processing applications, and automatic control systems. Because chaotically modulated signal sequences are broadband and noise-like, they are used to carry binary signals, especially in secure communication systems.
In this thesis, a target tracking problem under interference in chaotic communication systems is investigated. In the simulated chaotic communication system, noise-like signal sequences are generated to carry binary signals. These signal sequences are affected by Gaussian channel noise and interference while passing through the communication channel. At the receiver side, target tracking is performed using the Optimum Decoding Based Smoothing Algorithm. The estimation performance of the optimum decoding based smoothing algorithm on one-dimensional chaotic systems and a nonlinear chaotic map is presented and compared with the performance of an Extended Kalman Filter implementation.
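For context, the sketch below shows one simple way a chaotic map can carry binary data: each bit flips the sign of a short logistic-map segment (an antipodal chaos-shift-keying style scheme) and a coherent correlator recovers the bit. This toy receiver assumes the chaotic reference is known at the receiver and models only additive Gaussian noise; the thesis uses the Optimum Decoding Based Smoothing Algorithm and additionally considers interference, neither of which is reproduced here.

```python
import numpy as np

def logistic_segment(x0, length, r=3.99):
    """Iterate the logistic map x <- r*x*(1-x); broadband and noise-like for r near 4."""
    x, out = x0, []
    for _ in range(length):
        x = r * x * (1.0 - x)
        out.append(x)
    return np.array(out)

def modulate(bits, spread=32, seed=0.37):
    ref = logistic_segment(seed, spread * len(bits)).reshape(len(bits), spread)
    chips = ref - ref.mean(axis=1, keepdims=True)      # remove DC so segments are zero-mean
    return ((1 - 2 * bits)[:, None] * chips).ravel(), chips

def demodulate(rx, chips):
    corr = (rx.reshape(chips.shape) * chips).sum(axis=1)   # correlate with known reference
    return (corr < 0).astype(int)

rng = np.random.default_rng(1)
bits = rng.integers(0, 2, 1000)
tx, chips = modulate(bits)
rx = tx + 0.2 * rng.standard_normal(tx.size)               # AWGN channel
print("bit errors:", int(np.count_nonzero(demodulate(rx, chips) != bits)))
```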
|
78 |
Application of ODSA to Population Calculation. Ulukaya, Mustafa, 01 April 2006
In this thesis, the Optimum Decoding-based Smoothing Algorithm (ODSA) is applied to the well-known discrete Lotka-Volterra model. The performance of the algorithm is investigated for various parameters through simulations. Moreover, ODSA is compared with the SIR particle filter algorithm. The advantages and disadvantages of both algorithms are presented.
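For concreteness, the sketch below simulates a discrete-time Lotka-Volterra predator-prey recursion together with noisy population observations, i.e. the kind of nonlinear state-space problem on which a smoother such as ODSA or an SIR particle filter would be run and compared. The Euler-style discretization, parameter values and Gaussian observation model are illustrative assumptions rather than the exact model used in the thesis.

```python
import numpy as np

def lotka_volterra_step(x, y, h=0.01, alpha=1.0, beta=0.1, gamma=1.5, delta=0.075):
    """One step of the discretized predator-prey dynamics (x: prey, y: predators)."""
    x_next = x + h * x * (alpha - beta * y)
    y_next = y + h * y * (delta * x - gamma)
    return x_next, y_next

def simulate(n=500, x0=10.0, y0=5.0, obs_sigma=0.5, seed=0):
    rng = np.random.default_rng(seed)
    states, obs = [], []
    x, y = x0, y0
    for _ in range(n):
        x, y = lotka_volterra_step(x, y)
        states.append((x, y))
        # Noisy counts of both populations; a smoother estimates the states from these.
        obs.append((x + obs_sigma * rng.standard_normal(),
                    y + obs_sigma * rng.standard_normal()))
    return np.array(states), np.array(obs)

states, observations = simulate()
print("prey range:", states[:, 0].min().round(2), "-", states[:, 0].max().round(2))
print("observation RMSE vs. true states:",
      round(float(np.sqrt(np.mean((observations - states) ** 2))), 3))
```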
|
79 |
An MRF-Based Approach to Image and Video Resolution Enhancement. Vedadi, Farhang
The main part of this thesis is concerned with a detailed explanation of a newly proposed Markov random field-based de-interlacing algorithm. Previous works assume a first- or higher-order Markovian spatial inter-dependency between the pixel intensity values. In accordance with the specific interpolation problem at hand, they approximate the Markov random field parameters using the available original pixels and then, using the approximate model, define an objective function, such as the energy function of the MRF, to be optimized. The efficiency and accuracy of the optimization step are as important as the definitions of the cost (objective) function and the MRF model.

The major concept that distinguishes the newly proposed algorithm from the aforementioned MRF-based models is that the MRF is defined not over the intensity domain but over the interpolator (interpolation method) domain. Unlike previous MRF-based models, which try to estimate a two-dimensional array of pixel values, the new method estimates an MRF of interpolation functions (interpolators) associated with the 2-D array of pixel intensity values.

With some modifications, the proposed model can be used in related fields such as image and video up-conversion, view interpolation and frame-rate up-conversion. To demonstrate this potential, the proposed MRF-based model is extended to an image up-scaling algorithm, which uses a simplified version of the model to up-scale images by a factor of two in each spatial direction. Simulation results show that the proposed model achieves competitive performance in the two interpolation problems of video de-interlacing and image up-scaling. / Master of Applied Science (MASc)
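The sketch below illustrates the core idea in miniature: instead of an MRF over pixel intensities, a label field is defined over candidate interpolators (a vertical average versus two directional averages) for a missing interlaced line, and a data-plus-smoothness energy over the labels is minimized. The candidate set, the data term and the simple ICM-style optimization are illustrative stand-ins, not the model or optimizer proposed in the thesis.

```python
import numpy as np

def candidates(img, r, c):
    """Interpolator candidates for missing pixel (r, c) using the lines above/below."""
    up, dn = img[r - 1], img[r + 1]
    cl = lambda j: int(np.clip(j, 0, img.shape[1] - 1))
    return np.array([(up[c] + dn[c]) / 2.0,                  # 0: vertical average
                     (up[cl(c - 1)] + dn[cl(c + 1)]) / 2.0,  # 1: one diagonal direction
                     (up[cl(c + 1)] + dn[cl(c - 1)]) / 2.0]) # 2: the other diagonal

def data_cost(img, r, c):
    """How well each interpolator agrees with the two known lines it bridges."""
    up, dn = img[r - 1], img[r + 1]
    cl = lambda j: int(np.clip(j, 0, img.shape[1] - 1))
    return np.array([abs(up[c] - dn[c]),
                     abs(up[cl(c - 1)] - dn[cl(c + 1)]),
                     abs(up[cl(c + 1)] - dn[cl(c - 1)])])

def deinterlace_row(img, r, beta=10.0, sweeps=3):
    """Pick one interpolator label per missing pixel by a few ICM sweeps, then apply it."""
    w = img.shape[1]
    labels = np.zeros(w, dtype=int)
    for _ in range(sweeps):
        for c in range(w):
            cost = data_cost(img, r, c).copy()
            for nb in (c - 1, c + 1):                       # Potts smoothness on labels
                if 0 <= nb < w:
                    cost += beta * (np.arange(3) != labels[nb])
            labels[c] = int(np.argmin(cost))
    return np.array([candidates(img, r, c)[labels[c]] for c in range(w)])

# Toy field: a diagonal edge; line 1 is "missing" and gets reconstructed from lines 0 and 2.
img = np.array([[0, 0, 0, 100, 100, 100],
                [0, 0, 0,   0,   0,   0],     # missing line (placeholder values)
                [0, 0, 100, 100, 100, 100]], dtype=float)
print(deinterlace_row(img, 1))
```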
|
80 |
The Impact of Channel Estimation Error on Space-Time Block and Trellis Codes in Flat and Frequency Selective Channels. Chi, Xuan, 22 July 2003
Recently, multiple-antenna systems have received significant attention from researchers as a means to improve the energy and spectral efficiency of wireless systems. Among the many classes of schemes, space-time block codes (STBC) and space-time trellis codes (STTC) have been the subject of many investigations.
Both techniques provide a means for combatting the effects of multipath fading without adding much complexity to the receiver. This is especially useful in the downlink of wireless systems. In this thesis we investigate the impact of channel estimation error on the performance of both STBC and STTC.
Channel estimation is especially important to consider in multiple antenna systems since (A) for coherent systems there are more channels to estimate due to multiple antennas and (B) the decoupling of data streams relies on correct channel estimation. The latter effect is due to the intentional cross-talk introduced into STBC. / Master of Science
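The sketch below reproduces the basic effect in a simple setting: Alamouti (2x1) space-time block coding with BPSK over flat Rayleigh fading, where the receiver combines with a noisy channel estimate h_hat = h + e. The Gaussian estimation-error model and the chosen SNR and error variances are illustrative assumptions, not the thesis's setup, but they show how imperfect channel knowledge degrades the decoupling of the two data streams.

```python
import numpy as np

rng = np.random.default_rng(2)

def alamouti_ber(n_pairs=200_000, snr_db=10.0, est_err_var=0.0):
    s = 1.0 - 2.0 * rng.integers(0, 2, (n_pairs, 2)).astype(float)     # BPSK pairs (s1, s2)
    h = (rng.standard_normal((n_pairs, 2)) + 1j * rng.standard_normal((n_pairs, 2))) / np.sqrt(2)
    n_var = 10 ** (-snr_db / 10)
    noise = np.sqrt(n_var / 2) * (rng.standard_normal((n_pairs, 2)) + 1j * rng.standard_normal((n_pairs, 2)))
    # Two received samples per Alamouti block (symbols sent at half power per antenna).
    r1 = (h[:, 0] * s[:, 0] + h[:, 1] * s[:, 1]) / np.sqrt(2) + noise[:, 0]
    r2 = (-h[:, 0] * np.conj(s[:, 1]) + h[:, 1] * np.conj(s[:, 0])) / np.sqrt(2) + noise[:, 1]
    # Imperfect CSI: the linear combiner uses h_hat instead of the true channel h.
    e = np.sqrt(est_err_var / 2) * (rng.standard_normal((n_pairs, 2)) + 1j * rng.standard_normal((n_pairs, 2)))
    hh = h + e
    s1_hat = np.conj(hh[:, 0]) * r1 + hh[:, 1] * np.conj(r2)
    s2_hat = np.conj(hh[:, 1]) * r1 - hh[:, 0] * np.conj(r2)
    dec = np.sign(np.real(np.stack([s1_hat, s2_hat], axis=1)))
    return np.mean(dec != s)

for var in (0.0, 0.05, 0.2):
    print(f"channel-estimate error variance {var:.2f}: BER = {alamouti_ber(est_err_var=var):.4f}")
```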
|