61

"Route Record Distance Vector Protocol for Minimization of Intra-Flow Interference"

Seibel, Roman 24 October 2013 (has links)
No description available.
62

Symbol Synchronization for MSK Signals Based on Matched Filtering

Sezginer, Serdar 01 January 2003 (has links) (PDF)
In this thesis, symbol timing recovery in MSK signals is investigated making use of matched filtering. A decision-directed symbol synchronizer cascaded with an MLSE receiver is proposed for fine timing. The correlation (matched-filter) method is used to recover the timing epoch from the tentative decisions obtained from the Viterbi algorithm. The fractional delays are acquired using interpolation and an iterative maximum-search process. In order to investigate the tracking performance of the proposed symbol synchronizer, a study is carried out on three possible optimum timing-phase criteria: (i) the Mazo criterion, (ii) the minimum squared ISI (msISI) criterion, and (iii) the minimum BER criterion. Moreover, the timing sensitivity of the MLSE receiver is discussed. The performance of the symbol synchronizer is assessed by computer simulations. It is observed that the proposed synchronizer tracks channel variations almost exactly as the msISI criterion does. The proposed method eliminates cycle slips very successfully and is robust to frequency-selective multipath fading channel conditions even at moderate signal-to-noise ratios.
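The interpolation and maximum-search step mentioned in the abstract can be illustrated with a generic sketch (not the thesis's actual implementation): given integer-spaced samples of a matched-filter correlation, a fractional delay can be estimated by fitting a parabola through the three samples around the peak. All names and the test signal below are illustrative assumptions.

```python
import math

def fractional_peak(corr):
    """Estimate the peak position of a sampled correlation sequence with
    sub-sample resolution by fitting a parabola through the three samples
    around the maximum."""
    k = max(range(len(corr)), key=lambda i: corr[i])
    if k == 0 or k == len(corr) - 1:
        return float(k)          # peak at the edge: no neighbours to fit
    y0, y1, y2 = corr[k - 1], corr[k], corr[k + 1]
    delta = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)  # vertex of the parabola
    return k + delta

# Toy example: a Gaussian-shaped correlation peak sampled off-grid.
true_delay = 9.3
corr = [math.exp(-((t - true_delay) ** 2) / 8.0) for t in range(20)]
est = fractional_peak(corr)      # close to 9.3 despite integer-spaced samples
```

In a synchronizer such an estimate would be refined iteratively as new tentative decisions arrive; here a single fit already recovers the delay to within a few hundredths of a sample.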
63

Kanalschätzung, Demodulation und Kanalcodierung in einem FPGA-basierten OFDM-Funkübertragungssystem [Channel estimation, demodulation and channel coding in an FPGA-based OFDM radio transmission system]

Tönder, Nico January 2007 (has links)
Also published as: Hamburg, Technical University, dissertation, 2007
64

Módulo de Treliça Mínimo Para Códigos Convolucionais [Minimal Trellis Module for Convolutional Codes]

BENCHIMOL, Isaac Benjamim 22 November 2012 (has links)
This thesis presents a computational-complexity measure for convolutional codes suited to receivers that implement the Viterbi algorithm in software. Defining this complexity involves determining the number of arithmetic operations executed in a trellis module during decoding, implementing them on a digital-signal-processor architecture, and evaluating the computational cost of each operation. This measure is then used to assess the impact of sectionalizing the minimal trellis module. A set of rules is introduced for constructing sectionalization patterns that yield more compact and regular trellis structures with the same complexity as the minimal trellis, an attractive alternative for practical applications. Finally, this work presents a method for constructing the minimal trellis module for the recursive systematic convolutional encoders adopted in turbo schemes. This approach helps reduce the decoding complexity of a typical turbo decoder operating with high-rate constituent encoders. A code search is carried out, yielding a refined trade-off between decoding complexity and the effective free distance of the turbo code. / FAPEAM
65

Investigation of island geometry variations in bit patterned media storage systems

Shi, Yuanjing January 2011 (has links)
Bit-Patterned Media (BPM) has been recognised as one of the candidate technologies for achieving an areal density beyond 1 Tb/in² by fabricating single-domain islands out of continuous magnetic media. Although much attention has been focused on the fabrication of BPM, existing lithography techniques have difficulty producing uniform islands over large areas cost-effectively; the resulting fabricated islands often vary in position and size. The primary purpose of the research documented in this thesis is to investigate the effect of island geometry variations on the data-recovery process in perpendicular patterned media, with head and media configurations optimised to achieve an areal density of 1 Tb/in². To achieve this aim, a read-channel model has been implemented as a platform for evaluating read-channel performance numerically; it can also be altered to investigate new read-channel designs. The simulated results demonstrate that island geometry variations have a detrimental effect on read-channel performance. They show that a BPM system can be tolerant of island position variations, but that more attention must be paid to the effect of island size variations on read-channel performance. A new read-channel design revolving around a modified trellis has been proposed for use in the Viterbi detector in order to combat the effect of island geometry variations. The modified trellis for island position variations results in extra states and branches compared to the standard trellis, while the modified trellis for island size variations results in only extra branches. The novel read-channel designs demonstrate improved performance in the presence of island geometry variations, even with increasing amounts of island position and size variation.
There are two ways to obtain the read-channel performance in terms of the bit-error rate (BER): a) running a numerical Monte Carlo simulation to count the bits in error at the output of the read-channel model, and b) using an analytical approach that calculates the BER by approximating the noise with a known distribution. Both approaches give very similar results, which indicates that, as long as the distribution of the noise present in the read-channel model is predictable, the analytical approach can evaluate BER performance more efficiently, especially when the BER is low. The Monte Carlo simulation, however, remains useful for understanding the correlation of the errors. The novel trellises proposed in this work can contribute to the commercial development of BPM in two ways: a) by improving the data-recovery process in BPM systems, and b) by allowing a tolerance of 10% size variation for existing fabrication techniques.
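The two BER-estimation routes this abstract compares can be sketched for the simplest textbook case, BPSK over AWGN, where the analytical BER is the Gaussian tail probability Q(√(2·Eb/N0)). This is a generic illustration, not the thesis's read-channel model; all function names below are invented for the sketch.

```python
import math
import random

def q_function(x):
    """Gaussian tail probability Q(x) via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def monte_carlo_ber(ebn0_db, n_bits=200_000, seed=1):
    """Count bit errors for BPSK over AWGN at the given Eb/N0 (dB)."""
    rng = random.Random(seed)
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    sigma = math.sqrt(1.0 / (2.0 * ebn0))    # noise std for unit-energy symbols
    errors = 0
    for _ in range(n_bits):
        bit = rng.randint(0, 1)
        symbol = 1.0 if bit else -1.0
        received = symbol + rng.gauss(0.0, sigma)
        decided = 1 if received > 0 else 0
        errors += (decided != bit)
    return errors / n_bits

ebn0_db = 4.0
simulated = monte_carlo_ber(ebn0_db)
analytical = q_function(math.sqrt(2.0 * 10.0 ** (ebn0_db / 10.0)))
```

The two numbers agree to within Monte Carlo noise, and the analytical route needs no simulation at all, which is exactly why it wins at low BER, where counting enough errors becomes expensive.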
66

Object Tracking based on Eye Tracking Data : A comparison with a state-of-the-art video tracker

Ejnestrand, Ida, Jakobsson, Linnéa January 2020 (has links)
The process of locating moving objects through video sequences is a fundamental computer-vision problem. This process is referred to as video tracking and has a broad range of applications. Even though video tracking is an open research topic that has received much attention in recent years, developing accurate and robust algorithms that can handle complicated tracking tasks and scenes is still challenging. One challenge in computer vision is to develop systems that, like humans, can understand, interpret and recognize visual information in different situations. In this master thesis work, a tracking algorithm based on eye-tracking data is proposed. The aim was to compare the tracking performance of the proposed algorithm with that of a state-of-the-art video tracker. The algorithm was tested on gaze signals from five participants, recorded with an eye tracker while the participants were exposed to dynamic stimuli: moving objects displayed on a stationary computer screen. The proposed algorithm works offline, meaning that all data is collected before analysis. The results show that the overall performance of the proposed eye-tracking algorithm is comparable to that of the state-of-the-art video tracker. The main weaknesses are low accuracy for the proposed eye-tracking algorithm and handling of occlusion for the video tracker. We also suggest a method for using eye tracking as a complement to object-tracking methods. The results show that the eye tracker can be used in some situations to improve the tracking result of the video tracker: the proposed algorithm can help the video tracker redetect objects that have been occluded or are otherwise not detected correctly. However, the video tracker, ATOM, delivers higher accuracy.
67

Effiziente Viterbi Decodierung und Anwendung auf die Bildübertragung in gestörten Kanälen [Efficient Viterbi decoding and its application to image transmission over noisy channels]

Röder, Martin 26 October 2017 (has links)
The standard method for decoding convolutional codes is the Viterbi algorithm, which determines, from a received encoded data block, the data that the sender most likely transmitted. The List Viterbi algorithms build on the Viterbi algorithm; they find not only the most likely solution but a list of the n most likely solutions (paths). The first part of the thesis describes and analyses the List Viterbi algorithms known from the literature and compares them with respect to their complexity. In addition, a special implementation of the Tree Trellis algorithm is proposed that reduces the time complexity from quadratic to linear. The second part of the thesis considers the application of convolutional codes to image transmission. It is shown that increasing the number of paths considered during decoding, made possible by the reduced time complexity, significantly improves the results of an existing image-transmission scheme.
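As background for the List Viterbi algorithms this entry discusses, a minimal hard-decision Viterbi decoder for the classic rate-1/2, constraint-length-3 (7,5) convolutional code can be sketched as follows. This is an illustrative toy under assumed conventions, unrelated to the thesis's optimized Tree Trellis implementation.

```python
# Rate-1/2, constraint-length-3 convolutional code with generator
# polynomials (7, 5) in octal -- a textbook example code.
G = (0b111, 0b101)
K = 3
N_STATES = 1 << (K - 1)                       # 4 trellis states

def conv_encode(bits):
    """Encode a bit list; the shift register starts in the all-zero state."""
    state, out = 0, []
    for b in bits:
        reg = (b << (K - 1)) | state          # newest bit at the top
        out.extend(bin(reg & g).count("1") & 1 for g in G)
        state = reg >> 1
    return out

def viterbi_decode(received, n_bits):
    """Hard-decision Viterbi decoding by minimum Hamming distance."""
    INF = float("inf")
    metric = [0.0] + [INF] * (N_STATES - 1)   # encoder starts in state 0
    paths = [[] for _ in range(N_STATES)]
    for t in range(n_bits):
        r = received[2 * t: 2 * t + 2]
        new_metric = [INF] * N_STATES
        new_paths = [None] * N_STATES
        for state in range(N_STATES):
            if metric[state] == INF:
                continue
            for b in (0, 1):                  # hypothesize the input bit
                reg = (b << (K - 1)) | state
                expect = [bin(reg & g).count("1") & 1 for g in G]
                cost = metric[state] + sum(x != y for x, y in zip(expect, r))
                nxt = reg >> 1
                if cost < new_metric[nxt]:    # keep the better survivor
                    new_metric[nxt] = cost
                    new_paths[nxt] = paths[state] + [b]
        metric, paths = new_metric, new_paths
    best = min(range(N_STATES), key=lambda s: metric[s])
    return paths[best]

bits = [1, 0, 1, 1, 0, 0, 1, 0]
coded = conv_encode(bits)
noisy = coded[:]
noisy[4] ^= 1                                 # flip one channel bit
decoded = viterbi_decode(noisy, len(bits))    # recovers the original bits
```

A List Viterbi algorithm would retain, per state, the n best survivors instead of only the single best one; the Tree Trellis implementation described above organizes that candidate list so the extension stays cheap.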
68

A High Throughput Low Power Soft-Output Viterbi Decoder

Ouyang, Gan January 2011 (has links)
A high-throughput, low-power Soft-Output Viterbi decoder designed for the convolutional codes used in the ECMA-368 UWB standard is presented in this thesis. Ultra-wideband (UWB) wireless communication technology is intended for the physical layer of wireless personal area networks (WPAN) and next-generation Bluetooth. MB-OFDM is a popular scheme for implementing UWB systems and is adopted in the ECMA-368 standard. To make high-speed data transferred over the channel reappear reliably at the receiver, error-correcting codes (ECC) are widely used in modern communication systems. The ECMA-368 standard uses concatenated convolutional codes and Reed-Solomon (RS) codes to encode the PLCP header, and convolutional codes alone to encode the PPDU payload. The Viterbi algorithm (VA) is a popular method of decoding convolutional codes owing to its fairly low hardware-implementation complexity and relatively good performance. The Soft-Output Viterbi Algorithm (SOVA), proposed by J. Hagenauer in 1989, is a modified Viterbi algorithm. A SOVA decoder can not only take in soft quantized samples but also provide soft outputs by estimating the reliability of the individual symbol decisions. These reliabilities can be passed to the subsequent decoder to improve the decoding performance of the concatenated decoder. The SOVA decoder is designed to decode the convolutional codes defined in the ECMA-368 standard; its code rate and constraint length are R=1/3 and K=7, respectively. Additional code rates derived from the "mother" rate-1/3 code by puncturing, including 1/2, 3/4 and 5/8, can also be decoded. To speed up the add-compare-select unit (ACSU), which is always the speed bottleneck of the decoder, the modified CSA structure proposed by E. Yeo is adopted in place of the conventional ACS structure.
In addition, seven-level quantization instead of the traditional eight-level quantization is used in this decoder to further speed up the ACSU and reduce its hardware-implementation overhead. In the SOVA decoder, the delay line storing the path-metric difference of every state accounts for the major portion of the overall required memory. A novel hybrid survivor-path management architecture using a modified trace-forward method is proposed; it reduces the overall required memory and achieves high throughput without consuming much power. This thesis also shows how to optimize the other modules of the SOVA decoder: for example, the first K-1 stages in the Path Comparison Unit (PCU) and Reliability Measurement Unit (RMU) are removed without affecting the decoding results. The attractiveness of the SOVA decoder motivates delivering its soft output to the RS decoder. Bit reliability must be converted into symbol reliability because the SOVA decoder's soft output is bit-oriented, while the RS decoder requires a reliability per byte; no optimum transformation strategy exists because the SOVA output is correlated. This thesis compares two sub-optimum transformation strategies and proposes an easy-to-implement scheme for concatenating the SOVA decoder and the RS decoder at various convolutional code rates. Simulation results show that, using this scheme, the concatenated SOVA-RS decoder achieves about 0.35 dB of decoding-performance gain over the conventional Viterbi-RS decoder.
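The bit-to-symbol reliability conversion discussed above can be illustrated with one common sub-optimum rule: treat each byte as only as trustworthy as its weakest bit, i.e. take the minimum absolute bit reliability within the symbol. This sketch is a generic illustration and not necessarily either of the two strategies the thesis compares; the sample values are invented.

```python
def symbol_reliability(bit_llrs, bits_per_symbol=8):
    """Collapse per-bit reliabilities (LLR magnitudes) into per-symbol
    reliabilities by taking the least reliable bit in each symbol."""
    symbols = []
    for i in range(0, len(bit_llrs), bits_per_symbol):
        chunk = bit_llrs[i:i + bits_per_symbol]
        symbols.append(min(abs(l) for l in chunk))
    return symbols

# Two bytes' worth of hypothetical SOVA bit reliabilities (signed LLRs).
llrs = [3.2, -7.1, 0.4, 5.5, -2.2, 9.0, -6.3, 1.8,   # byte 0: weakest bit 0.4
        4.4, 4.1, -3.9, 8.2, 2.7, -5.0, 6.6, 3.3]    # byte 1: weakest bit 2.7
byte_rel = symbol_reliability(llrs)
```

An RS errors-and-erasures decoder could then erase the symbols with the lowest resulting reliabilities; because SOVA outputs are correlated, the min rule is a heuristic rather than an optimum transform, which is exactly the gap the abstract notes.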
69

Blind Acquisition of Short Burst with Per-Survivor Processing (PSP)

Mohammad, Maruf H. 13 December 2002 (has links)
This thesis investigates the use of Maximum Likelihood Sequence Estimation (MLSE) in the presence of unknown channel parameters. MLSE is a fundamental problem that is closely related to many modern research areas like Space-Time Coding, Overloaded Array Processing and Multi-User Detection. Per-Survivor Processing (PSP) is a technique for approximating MLSE for unknown channels by embedding channel estimation into the structure of the Viterbi Algorithm (VA). In the case of successful acquisition, the convergence rate of PSP is comparable to that of the pilot-aided RLS algorithm. However, the performance of PSP degrades when certain sequences are transmitted. In this thesis, the blind acquisition characteristics of PSP are discussed. The problematic sequences for any joint ML data and channel estimator are discussed from an analytic perspective. Based on the theory of indistinguishable sequences, modifications to conventional PSP are suggested that improve its acquisition performance significantly. The effect of tree search and list-based algorithms on PSP is also discussed. Proposed improvement techniques are compared for different channels. For higher order channels, complexity issues dominate the choice of algorithms, so PSP with state reduction techniques is considered. Typical misacquisition conditions, transients, and initialization issues are reported. / Master of Science
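The core idea of PSP, each survivor maintaining its own channel estimate driven by its own hypothesized symbols, can be sketched with a single survivor's LMS update for a hypothetical 2-tap channel. This is an illustrative sketch under assumed names and parameters, not the thesis's algorithm; in full PSP one such update runs inside every branch of the Viterbi recursion.

```python
import random

def lms_update(h, symbols, received, mu=0.05):
    """One LMS step of a per-survivor channel estimate.
    h        : current tap estimates [h0, h1] for a 2-tap channel
    symbols  : this survivor's hypothesized (current, previous) symbols
    received : the actual received sample
    """
    predicted = h[0] * symbols[0] + h[1] * symbols[1]
    error = received - predicted
    # Nudge each tap in proportion to the error and the symbol it multiplied.
    return [h[0] + mu * error * symbols[0],
            h[1] + mu * error * symbols[1]], error

# Drive the estimate toward a true channel [1.0, 0.5] with random BPSK symbols
# (noiseless for clarity; the correct symbol hypotheses are assumed known).
rng = random.Random(0)
true_h = [1.0, 0.5]
h = [0.0, 0.0]
prev = 1.0
for _ in range(2000):
    cur = rng.choice([-1.0, 1.0])
    r = true_h[0] * cur + true_h[1] * prev
    h, _ = lms_update(h, (cur, prev), r)
    prev = cur
```

Survivors carrying wrong symbol hypotheses accumulate larger prediction errors and thus worse path metrics, which is how PSP couples channel acquisition to sequence detection; the problematic sequences the thesis analyzes are those for which distinct (data, channel) pairs predict the same received samples.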
70

Generalization of Signal Point Target Code

Billah, Md Munibun 01 August 2019 (has links)
Detecting and correcting errors that occur in data transmitted through a channel is a task of great importance in digital communication. In Error Correction Coding (ECC), redundant data is added to the original data before transmission; by exploiting the properties of the redundant data, errors introduced during transmission can be detected and corrected. In this thesis, a new coding algorithm named Signal Point Target Code (SPTC) has been studied and various properties of the proposed code have been extended. SPTC uses a predefined shape within a given signal constellation to generate a parity symbol. The relation between the employed shape and the performance of the proposed code has been studied, and an extension of SPTC is presented. This research presents simulation results comparing the performances of the proposed codes. The results have been simulated using different programming languages, and a comparison of those languages is provided. The performance of the codes is analyzed, and possible future research areas are indicated.
