91

Simulation and Performance Evaluation of Algorithms for Unmanned Aircraft Conflict Detection and Resolution

Ledet, Jeffrey H 13 May 2016 (has links)
The problem of aircraft conflict detection and resolution (CDR) under uncertainty is addressed in this thesis. The main goal in CDR is to provide safety for the aircraft while minimizing their fuel consumption and flight delays. In reality, a high degree of uncertainty can exist in certain aircraft-aircraft encounters, especially in cases where aircraft do not have the capability to communicate with each other. Through the use of a probabilistic approach and a multiple model (MM) trajectory information processing framework, this uncertainty can be effectively handled. For conflict detection, a randomized Monte Carlo (MC) algorithm is used to accurately detect conflicts, and, if a conflict is detected, a conflict resolution algorithm is run that utilizes a sequential list Viterbi algorithm. This thesis presents the MM CDR method and a comprehensive MC simulation and performance evaluation study that demonstrates its capabilities and efficiency.
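As a rough illustration of the Monte Carlo conflict-detection step described above, the sketch below estimates a pairwise conflict probability by sampling noisy constant-velocity trajectories. The dynamics model, noise level, and 5 NM separation threshold are illustrative assumptions, not the parameters used in the thesis.

```python
import numpy as np

def mc_conflict_probability(own_state, intruder_state, horizon_s=120.0,
                            dt=1.0, n_samples=2000, min_sep_m=9260.0,
                            vel_sigma=5.0, rng=None):
    """Estimate conflict probability by sampling uncertain trajectories.

    States are (x, y, vx, vy) in metres and m/s. Velocity uncertainty is
    modelled as Gaussian (an assumption); min_sep_m ~ 5 NM, a common
    horizontal separation threshold.
    """
    rng = rng or np.random.default_rng()
    steps = int(horizon_s / dt)
    conflicts = 0
    for _ in range(n_samples):
        # Sample one velocity realization per aircraft.
        v_own = own_state[2:] + rng.normal(0.0, vel_sigma, 2)
        v_int = intruder_state[2:] + rng.normal(0.0, vel_sigma, 2)
        p_own, p_int = own_state[:2].copy(), intruder_state[:2].copy()
        for _ in range(steps):
            p_own += v_own * dt
            p_int += v_int * dt
            if np.linalg.norm(p_own - p_int) < min_sep_m:
                conflicts += 1
                break
    return conflicts / n_samples

own = np.array([0.0, 0.0, 120.0, 0.0])           # eastbound at 120 m/s
intr = np.array([20000.0, 15000.0, -100.0, -80.0])
print(mc_conflict_probability(own, intr))
```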
92

Flexible Constraint Length Viterbi Decoders On Large Wire-area Interconnection Topologies

Garga, Ganesh 07 1900 (has links)
To achieve the goal of efficient "anytime, anywhere" communication, it is essential to develop mobile devices which can efficiently support multiple wireless communication standards. Also, in order to efficiently accommodate the further evolution of these standards, it should be possible to modify/upgrade the operation of the mobile devices without having to recall previously deployed devices. This is achievable if as much functionality of the mobile device as possible is provided through software. A mobile device which fits this description is called a Software Defined Radio (SDR). Reconfigurable hardware-based solutions are an attractive option for realizing SDRs as they can potentially provide a favourable combination of the flexibility of a DSP or a GPP and the efficiency of an ASIC. The work presented in this thesis discusses the development of efficient reconfigurable hardware for one of the most energy-intensive functionalities in the mobile device, namely, Forward Error Correction (FEC). FEC is required in order to achieve reliable transfer of information at minimal transmit power levels. FEC is achieved by encoding the information in a process called channel coding. Previous studies have shown that the FEC unit accounts for around 40% of the total energy consumption of the mobile unit. In addition, modern wireless standards also place the additional requirement of flexibility on the FEC unit. Thus, the FEC unit of the mobile device represents a considerable amount of computing ability that needs to be accommodated into a very small power, area and energy budget.

Two channel coding techniques have found widespread use in most modern wireless standards: convolutional coding and turbo coding. The Viterbi algorithm is most widely used for decoding convolutionally encoded sequences, and it is possible to use this algorithm iteratively in order to decode turbo codes. Hence, this thesis specifically focusses on developing architectures for flexible Viterbi decoders. Chapter 2 provides a description of the Viterbi and turbo decoding techniques.

The flexibility requirements placed on the Viterbi decoder by modern standards can be divided into two types: code rate flexibility and constraint length flexibility. The code rate dictates the number of received bits which are handled together as a symbol at the receiver. Hence, code rate flexibility needs to be built into the basic computing units which are used to implement the Viterbi algorithm. The constraint length dictates the number of computations required per received symbol as well as the manner of transfer of results between these computations. Hence, assuming that multiple processing units are used to perform the required computations, supporting constraint length flexibility necessitates changes in the interconnection network connecting the computing units. A constraint length K Viterbi decoder needs 2^(K-1) computations to be performed per received symbol. The results of the computations are exchanged among the computing units in order to prepare for the next received symbol. The communication pattern according to which these results are exchanged forms a graph called a de Bruijn graph, with 2^(K-1) nodes. This implies that providing constraint length flexibility requires being able to realize de Bruijn graphs of various sizes on the interconnection network connecting the processing units. This thesis focusses on providing constraint length flexibility in an efficient manner.
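The de Bruijn structure of the state-metric exchange can be made concrete with a small sketch: in a minimal Viterbi add-compare-select (ACS) step over 2^(K-1) states, each state draws its two candidate metrics from exactly the two de Bruijn predecessors described above. The branch-metric callback is a placeholder for whatever the demodulator supplies.

```python
import numpy as np

def acs_step(path_metrics, branch_metric, K):
    """One Viterbi add-compare-select (ACS) step over 2^(K-1) trellis states.

    State s receives its two candidate metrics from its de Bruijn
    predecessors (s >> 1) and (s >> 1) | 2^(K-2): exactly the
    communication pattern described above. branch_metric(pred, s) stands
    in for whatever the demodulator supplies (an assumption here).
    """
    n_states = 1 << (K - 1)
    high_bit = 1 << (K - 2)
    new_metrics = np.empty(n_states)
    for s in range(n_states):
        p0, p1 = s >> 1, (s >> 1) | high_bit   # the two de Bruijn predecessors
        m0 = path_metrics[p0] + branch_metric(p0, s)
        m1 = path_metrics[p1] + branch_metric(p1, s)
        new_metrics[s] = min(m0, m1)           # keep the survivor
    return new_metrics

K = 3                                          # 4 states, toy example
bm = lambda pred, s: float((pred ^ s) & 1)     # dummy branch metric (assumption)
print(acs_step(np.zeros(1 << (K - 1)), bm, K))
```

In a hardware realization each iteration of the loop corresponds to one processing unit, and the p0/p1 indexing is precisely the wiring that the interconnection topologies discussed next must be able to emulate for every supported K.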
Quite clearly, the topology employed for interconnecting the processing units has a huge effect on the efficiency with which multiple constraint lengths can be supported. This thesis aims to explore the usefulness of interconnection topologies similar to the de Bruijn graph for building constraint length flexible Viterbi decoders. Five different topologies are considered in this thesis, discussed under two headings below.

De Bruijn network-based architectures: The interconnection network of chief interest in this thesis is the de Bruijn interconnection network itself, as it is identical to the communication pattern for a Viterbi decoder of a given constraint length. The problem of realizing flexible constraint length Viterbi decoders using a de Bruijn network has been approached in two different ways. The first is an embedding-theoretic approach, where the problem of supporting multiple constraint lengths on a de Bruijn network is seen as a problem of embedding smaller de Bruijn graphs on a larger de Bruijn graph. Mathematical manipulations are presented to show that this embedding can generally be accomplished with a maximum dilation bounded in terms of N, the number of computing nodes in the physical network, while simultaneously avoiding any congestion of the physical links. In this case, however, the mapping of the decoder states onto the processing nodes is assumed fixed. Another scheme is derived based on a variable assignment of decoder states onto computing nodes, which turns out to be more efficient than the embedding-based approach. For this scheme, the maximum number of cycles per stage is found to be limited to 2, irrespective of the maximum constraint length to be supported. In addition, for smaller constraint lengths it is found to be possible to execute multiple smaller decoders in parallel on the physical network. Consequently, post logic-synthesis, this architecture is found to be more area-efficient than the architecture based on the embedding-theoretic approach. It is also a more efficiently scalable architecture.

Alternative architectures: There are several interconnection topologies which are closely connected to the de Bruijn graph, and hence could form attractive alternatives for realizing flexible constraint length Viterbi decoders. We consider two more topologies from this class, namely the shuffle-exchange network and the flattened butterfly network. The variable state assignment scheme developed for the de Bruijn network is found to be directly applicable to the shuffle-exchange network. The average number of clock cycles per stage is found to be limited to 4 in this case, again independent of the constraint length to be supported. On the flattened butterfly (which is actually identical to the hypercube), a state scheduling scheme similar to that of bitonic sorting is used. This architecture is found to offer the ideal throughput of one decoded bit every clock cycle, for any constraint length. For comparison with a more general purpose topology, we consider a flexible constraint length Viterbi decoder architecture based on a 2D mesh, which is a popular choice for general purpose applications as well as many signal processing applications. The state scheduling scheme used here is also similar to that used for bitonic sorting on a mesh. All the alternative architectures are capable of executing multiple smaller decoders in parallel on the larger interconnection network.
Inferences: Following logic synthesis and power estimation, it is found that the de Bruijn network-based architecture with the variable state assignment scheme yields the lowest (area)·(time) product, while the flattened butterfly network-based architecture yields the lowest (area)·(time)² product. This means that the de Bruijn network-based architecture is the best choice for moderate throughput applications, while the flattened butterfly network-based architecture is the best choice for high throughput applications. However, as the flattened butterfly network is less scalable in terms of size compared to the de Bruijn network, it can be concluded that, among the architectures considered in this thesis, the de Bruijn network-based architecture with the variable state assignment scheme is overall an attractive choice for realizing flexible constraint length Viterbi decoders.
93

Code Aided Frame Synchronization For Frequency Selective Channels

Ekinci, Umut Utku 01 May 2010 (has links) (PDF)
Frame synchronization is an important problem in digital communication systems. In frame synchronization, the main task is to find the frame start given the flow of communication symbols. In this thesis, the frame synchronization problem is investigated for both additive white Gaussian noise (AWGN) channels and frequency selective channels. Most previous work on frame synchronization considers the simple case of AWGN channels, and the algorithms developed for this purpose fail in frequency selective channels, for which only a limited number of algorithms have been proposed. In this thesis, existing frame synchronization techniques are investigated for both AWGN and frequency selective channels, and code-aided frame synchronization techniques are combined with methods for frequency selective channels. Mainly two types of code-aided frame synchronization schemes are considered, and two new system structures are proposed for frame synchronization, one of which performs better than the alternative methods for frequency selective channels. The overall system for this new synchronizer is composed of a list synchronizer which generates the possible frame starts, a channel estimator, a soft output MLSE equalizer, and a soft output Viterbi decoder. A mode separation algorithm is used to generate the statistics for the selection of the true frame start. Several experiments are carried out and the performance is characterized for a variety of scenarios.
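For intuition, the sketch below implements the classical correlation rule that a list synchronizer of the kind described above builds on: rank every offset by its correlation with a known sync word and keep the top candidates. BPSK symbols and the Barker-7 word are illustrative assumptions; the code-aided statistics and MLSE equalization of the proposed structures are not modeled here.

```python
import numpy as np

def frame_start_candidates(rx, sync_word, n_candidates=4):
    """Rank candidate frame starts by correlation with a known sync word.

    A list synchronizer like the one described above would pass the top
    candidates on to channel estimation, equalization and decoding; here
    we only produce the ranked list. BPSK symbols (+/-1) are assumed.
    """
    m = len(sync_word)
    scores = np.array([np.dot(rx[k:k + m], sync_word)
                       for k in range(len(rx) - m + 1)])
    return np.argsort(scores)[::-1][:n_candidates]

rng = np.random.default_rng(0)
sync = np.array([1, 1, 1, -1, -1, 1, -1])   # Barker-7, an illustrative sync word
data = rng.choice([-1, 1], 100)
frame = np.concatenate([data[:40], sync, data[40:]])
rx = frame + rng.normal(0, 0.5, len(frame))
print(frame_start_candidates(rx, sync))     # true sync position is index 40
```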
94

Noncoherent Differential Demodulation Of Cpm Signals With Joint Frequency Offset And Symbol Timing Estimation

Culha, Onur 01 October 2011 (has links) (PDF)
In this thesis, noncoherent differential demodulation of CPM signals with joint carrier frequency offset and symbol timing estimation is investigated. CPM is very attractive for wireless communications owing to two major properties: good spectral efficiency and a constant envelope. In order to demodulate the received CPM signal differentially, the symbol timing and the carrier frequency offset have to be estimated accurately. Numerous methods have been developed for this purpose; however, we have not encountered studies (based on autocorrelation estimation and hence suitable for blind synchronization) that give acceptable performance for both M-ary and partial response signaling. Thus, in this thesis we analyze a feedforward blind estimation scheme which recovers the symbol timing and the frequency offset of M-ary CPM signals and partial response CPM signals. In addition, we survey low complexity symbol detection methods for CPM signals. A reduced-state Viterbi differential detector incorporated into the joint frequency offset and symbol timing estimator is also examined. The performance of the examined demodulator scheme is assessed for the AWGN channel by computer simulations.
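The autocorrelation principle mentioned above can be sketched in a few lines: for a constant-envelope signal, the product rx[n]·conj(rx[n−D]) rotates on average by an angle proportional to the frequency offset, so summing the products and taking the angle yields an estimate. This is a generic illustration under idealized assumptions, not the specific joint estimator analyzed in the thesis.

```python
import numpy as np

def estimate_cfo(rx, lag, symbol_period):
    """Blind frequency-offset estimate from the signal autocorrelation.

    For a constant-envelope signal, rx[n] * conj(rx[n - lag]) rotates on
    average by 2*pi*f_off*lag*T, so the angle of the summed products
    yields f_off (valid while |f_off| < 1 / (2*lag*T)).
    """
    acorr = np.sum(rx[lag:] * np.conj(rx[:-lag]))
    return np.angle(acorr) / (2 * np.pi * lag * symbol_period)

rng = np.random.default_rng(0)
T = 1e-5                                     # assumed symbol period (s)
f_off = 250.0                                # assumed true offset (Hz)
n = np.arange(2000)
tone = np.exp(2j * np.pi * f_off * n * T)    # idealized constant-envelope signal
noisy = tone + 0.1 * (rng.standard_normal(2000) + 1j * rng.standard_normal(2000))
print(estimate_cfo(noisy, lag=4, symbol_period=T))   # close to 250 Hz
```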
95

Investigation on the Frequency Domain Channel Equalization and Interference Cancellation for Single Carrier Systems

Chan, Kuei-Cheng 11 August 2008 (has links)
In single carrier systems with a cyclic prefix (CP), the use of the CP not only eliminates inter-block interference (IBI) but also converts the linear convolution of the transmitted signal with the channel into a circular convolution, which reduces the computational complexity of frequency domain equalization (FDE) at the receiver. Unfortunately, the use of the CP considerably decreases bandwidth utilization. In order to increase bandwidth utilization, single carrier systems with frequency domain equalization (SC-FDE) are investigated. When FDE is used in a single carrier system without a CP, IBI is induced by the modulated symbols and the bit-error rate (BER) is increased. To reduce this interference and improve system performance, a novel interference cancellation scheme is proposed in this thesis. After FDE, it is shown that interference is induced from the right end of a time domain signal block and that most of the interference is located at both ends of an equalized time domain signal block. Based on this observation, the modulated symbols which induce the interference are detected according to the maximum-likelihood (ML) principle, and the interference is then regenerated and eliminated. To reduce the computational complexity, we further propose a successive interference cancellation scheme, implemented using the Viterbi algorithm. The simulation results demonstrate that the proposed scheme improves BER performance significantly in SC-FDE systems. In addition, the proposed architecture has BER performance comparable to SC-CP systems when the multipath channel has an exponentially decaying power profile.
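A minimal sketch of the CP mechanism described in the first sentence: once the prefix is discarded, the channel acts as a circular convolution, so equalization reduces to a pointwise division in the FFT domain. The zero-forcing division, noiseless channel, and BPSK block below are simplifying assumptions (an MMSE variant would regularize the division).

```python
import numpy as np

def fde_receive(rx_block, channel, cp_len):
    """One-tap frequency-domain equalization of a CP single-carrier block.

    The cyclic prefix turns the channel into a circular convolution, so
    equalization is a pointwise division in the FFT domain (zero-forcing
    here for simplicity).
    """
    block = rx_block[cp_len:]                  # discard the prefix (absorbs IBI)
    H = np.fft.fft(channel, len(block))        # channel frequency response
    return np.fft.ifft(np.fft.fft(block) / H)  # ZF-equalized time-domain block

rng = np.random.default_rng(1)
N, cp = 64, 8
h = np.array([1.0, 0.5, 0.25])                 # assumed multipath channel
x = rng.choice([-1.0, 1.0], N)                 # BPSK block
tx = np.concatenate([x[-cp:], x])              # prepend cyclic prefix
rx = np.convolve(tx, h)[:len(tx)]              # linear channel acts on the block
print(np.allclose(np.sign(fde_receive(rx, h, cp).real), x))  # True
```

Dropping the prefix (the thesis's no-CP SC-FDE setting) breaks the circularity assumption in `fde_receive`, which is exactly the source of the residual IBI that the proposed cancellation scheme detects and removes.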
96

Low-power hardware architectures for decoding convolutional codes in wireless modems

Γκρίμπας, Δημήτρης 26 October 2007 (has links)
This thesis focuses on a class of algorithms for the correction of errors caused by the transmission of data through a wireless telecommunications channel. The modulations employed are BPSK, QPSK, 16-QAM and 64-QAM, and the study focuses on convolutional coding. The performance of solutions based on the Viterbi and SOVA algorithms is comparatively studied, as well as the corresponding hardware architectures, in terms of complexity, power consumption and speed, with specifications set in terms of error correction capability, measured as a reduction in BER. Also, four different ways of combined demodulation and decoding of QAM data are studied, based on the Viterbi algorithm. The methodology of this thesis includes the realization of a complete telecommunications system model with a non-ideal additive white Gaussian noise (AWGN) channel, to which error correction mechanisms are added. The study takes quantization effects at the receiver into consideration, both in the data representation and in the intermediate results of the algorithms. Quantization schemes were evaluated as a function of channel parameters, and the minimum necessary word lengths for implementing the receiver algorithms were identified, taking into account the trade-off between performance and hardware implementation cost. By means of bit-true simulations, ways of minimizing the dynamic range required for representing the intermediate metrics were studied. In every case the performance of the algorithms is analyzed in terms of the error rate at the receiver (BER), while the complexity of the corresponding VLSI implementation is also assessed.
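As a small illustration of the word-length study described above, the sketch below shows the kind of uniform quantizer a bit-true receiver simulation would sweep: the number of bits and the clipping range are the knobs traded off against BER. The 3-bit, clip-at-4 choice is purely illustrative, not the thesis's finding.

```python
import numpy as np

def quantize_soft(llr, n_bits, clip):
    """Uniform mid-tread quantizer for soft demodulator outputs.

    Bit-true simulations like those described above sweep n_bits and clip
    to find the shortest word length whose BER loss is acceptable.
    """
    levels = 2 ** (n_bits - 1) - 1              # symmetric signed range
    step = clip / levels
    return np.clip(np.round(llr / step), -levels, levels) * step

llr = np.random.default_rng(2).normal(0, 2.0, 8)
print(quantize_soft(llr, n_bits=3, clip=4.0))   # values snapped to 7 levels
```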
97

Unsupervised and semi-supervised training methods for eukaryotic gene prediction

Ter-Hovhannisyan, Vardges 17 November 2008 (has links)
This thesis describes new gene finding methods for eukaryotic gene prediction. The current methods for deriving model parameters for gene prediction algorithms are based on curated or experimentally validated sets of genes or gene elements. These training sets often require time and additional expert effort, especially for species that are in the initial stages of genome sequencing. Unsupervised training allows determination of model parameters from anonymous genomic sequence. The importance and practical applicability of unsupervised training is critical given the ever growing rate of eukaryotic genome sequencing. Three distinct training procedures are developed for diverse groups of eukaryotic species. GeneMark-ES is developed for species with strong donor and acceptor site signals, such as Arabidopsis thaliana, Caenorhabditis elegans and Drosophila melanogaster. The second version of the algorithm, GeneMark-ES-2, introduces an enhanced intron model to better describe the gene structure of fungal species, which possess relatively weak donor and acceptor splice sites and a well conserved branch point signal. GeneMark-LE, a semi-supervised training approach, is designed for eukaryotic species with a small number of introns. The results indicate that the developed unsupervised training methods perform well compared to other training methods, as estimated on sets of genes supported by EST-to-genome alignments. Analysis of novel genomes reveals interesting biological findings and shows that several candidate under-annotated and over-annotated fungal species are present in the current set of annotated fungal genomes.
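The self-training idea, deriving model parameters from unlabeled sequence by alternating prediction and re-estimation, can be illustrated with a toy Viterbi-training loop on a small HMM. This is only a stand-in under strong simplifications: GeneMark's models of exons, introns, and splice sites are far richer than the two-state, four-letter HMM assumed below.

```python
import numpy as np

def viterbi(obs, log_A, log_B, log_pi):
    """Standard Viterbi decode: most likely state path for a symbol sequence."""
    T, S = len(obs), log_A.shape[0]
    dp = np.full((T, S), -np.inf)
    back = np.zeros((T, S), dtype=int)
    dp[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        cand = dp[t - 1][:, None] + log_A          # cand[i, j]: metric of i -> j
        back[t] = np.argmax(cand, axis=0)
        dp[t] = cand[back[t], np.arange(S)] + log_B[:, obs[t]]
    path = np.zeros(T, dtype=int)
    path[-1] = np.argmax(dp[-1])
    for t in range(T - 2, -1, -1):
        path[t] = back[t + 1][path[t + 1]]
    return path

def viterbi_train(obs, S, V, n_iter=10, seed=0):
    """Unsupervised (Viterbi / segmental K-means) training of a small HMM.

    A toy analogue of the self-training idea described above: decode with
    the current parameters, then re-estimate the parameters from the
    decoded path. Initial-state probabilities are kept uniform.
    """
    rng = np.random.default_rng(seed)
    A = rng.dirichlet(np.ones(S), S)               # random initial transitions
    B = rng.dirichlet(np.ones(V), S)               # random initial emissions
    pi = np.full(S, 1.0 / S)
    for _ in range(n_iter):
        path = viterbi(obs, np.log(A), np.log(B), np.log(pi))
        for s in range(S):                         # re-estimate from the path
            mask = path == s
            B[s] = np.bincount(obs[mask], minlength=V) + 1.0  # add-one smoothing
            B[s] /= B[s].sum()
            nxt = path[1:][mask[:-1]]
            A[s] = np.bincount(nxt, minlength=S) + 1.0
            A[s] /= A[s].sum()
    return A, B

obs = np.array([0, 0, 1, 0, 3, 3, 2, 3, 3, 0, 0, 1] * 20)  # toy 4-letter sequence
A, B = viterbi_train(obs, S=2, V=4)
print(B.round(2))   # emission profiles separate into two composition classes
```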
98

Performance comparison of two implementations of TCM for QAM

Peh, Lin Kiat 12 1900 (has links)
Approved for public release; distribution is unlimited. / Trellis-Coded Modulation (TCM) is employed with quadrature amplitude modulation (QAM) to provide error correction coding with no expense in bandwidth. There are two common implementations of TCM, namely pragmatic TCM and Ungerboeck TCM. Both schemes employ Viterbi algorithms for decoding but have different code constructions. This thesis investigates and compares the performance of pragmatic TCM and Ungerboeck TCM by implementing the Viterbi decoding algorithm for both schemes with 16-QAM and 64-QAM. Both pragmatic and Ungerboeck TCM with six memory elements are considered. Simulations were carried out for both pragmatic and Ungerboeck TCM to evaluate their respective performance. The simulations were done using Matlab software, and an additive white Gaussian noise channel was assumed. The objective was to ascertain whether pragmatic TCM, with its reduced-complexity decoding, is more suitable for adaptive modulation than Ungerboeck TCM. / Civilian
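A sketch of the pragmatic TCM idea for 16-QAM: one information bit per symbol passes through the standard rate-1/2, 64-state convolutional code (six memory elements, generators 171/133 octal, which is what pragmatic TCM reuses), the two coded bits select a 4-point subset, and two uncoded bits pick the point within it. The particular subset labeling below is an assumption; the thesis's exact mapping may differ.

```python
G0, G1 = 0o171, 0o133   # standard rate-1/2, 64-state generators (pragmatic TCM)
K = 7                   # constraint length: six memory elements

def pragmatic_tcm_16qam(bits):
    """Map 3 information bits per symbol to 16-QAM, pragmatic-TCM style.

    One bit is convolutionally encoded; the two coded bits choose a
    4-point subset (one bit per I/Q axis) and the two uncoded bits choose
    the point within it. The subset labeling is an illustrative
    assumption, not necessarily the mapping used in the thesis.
    """
    assert len(bits) % 3 == 0
    pam = [-3.0, -1.0, 1.0, 3.0]       # 4-PAM levels per axis
    state, symbols = 0, []
    for i in range(0, len(bits), 3):
        b, u1, u0 = bits[i], bits[i + 1], bits[i + 2]
        reg = (b << (K - 1)) | state   # 7-bit encoder register, newest bit on top
        c0 = bin(reg & G0).count("1") & 1
        c1 = bin(reg & G1).count("1") & 1
        state = reg >> 1               # keep the six most recent bits
        symbols.append(complex(pam[(c0 << 1) | u0], pam[(c1 << 1) | u1]))
    return symbols

print(pragmatic_tcm_16qam([1, 0, 1, 0, 1, 1]))
```

The decoding advantage the abstract alludes to follows from this structure: the receiver runs one standard 64-state Viterbi decoder on the coded bits regardless of the QAM order, instead of a constellation-specific Ungerboeck trellis.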
99

Correlation attacks on stream ciphers using convolutional codes

Bruwer, Christian S 24 January 2006 (has links)
This dissertation investigates four methods for attacking stream ciphers that are based on nonlinear combining generators:
-- Two exhaustive-search correlation attacks, based on the binary derivative and the Lempel-Ziv complexity measure.
-- A fast-correlation attack utilizing the Viterbi algorithm.
-- A decimation attack, which can be combined with any of the above three attacks.
These are ciphertext-only attacks that exploit the correlation that occurs between the ciphertext and an internal linear feedback shift register (LFSR) of a stream cipher. This leads to a so-called divide-and-conquer attack that is able to reconstruct the secret initial states of all the internal LFSRs within the stream cipher. The binary derivative attack and the Lempel-Ziv attack apply an exhaustive search to find the secret key that is used to initialize the LFSRs. The binary derivative and the Lempel-Ziv complexity measures are used to discriminate between correct and incorrect solutions in order to identify the secret key. Both attacks are ideal for implementation on parallel processors. Experimental results show that the Lempel-Ziv correlation attack gives successful results for correlation levels of p = 0.482, requiring approximately 62000 ciphertext bits, and the binary derivative attack is successful for correlation levels of p = 0.47, using approximately 24500 ciphertext bits. The fast-correlation attack, utilizing the Viterbi algorithm, applies principles from convolutional coding theory to identify an embedded low-rate convolutional code in the PN-sequence generated by an internal LFSR. The embedded convolutional code can then be decoded with a low complexity Viterbi algorithm. The algorithm operates in two phases: in the first phase, a set of suitable parity check equations is found, based on the feedback taps of the LFSR, which has to be done only once for a targeted system; in the second phase, these parity check equations are utilized in a Viterbi decoding algorithm to recover the transmitted PN-sequence, thereby obtaining the secret initial state of the LFSR. Simulation results for a 19-bit LFSR show that this attack can recover the secret key for correlation levels of p = 0.485, requiring an average of only 153,448 ciphertext bits. All three attacks investigated in this dissertation are capable of attacking LFSRs with a length of approximately 40 bits. However, these attacks can be extended to attack much longer LFSRs by making use of a decimation attack. The decimation attack is able to reduce (decimate) the size of a targeted LFSR and can be combined with any of the three above correlation attacks to attack LFSRs with a length much longer than 40 bits. / Dissertation (MEng (Electronic Engineering))--University of Pretoria, 2007. / Electrical, Electronic and Computer Engineering / unrestricted
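The exhaustive-search flavor of attack can be sketched on a deliberately tiny register: generate each candidate LFSR output, score its agreement with the observed stream, and keep the best. The 9-bit register, tap choice, and plain agreement count are toy assumptions; the dissertation's discriminators are the binary derivative and Lempel-Ziv measures on registers of roughly 40 bits.

```python
import numpy as np

def lfsr_stream(taps, state, n, length):
    """n output bits of a Fibonacci LFSR; feedback is the XOR of `taps` bits."""
    out = np.empty(n, dtype=np.uint8)
    for i in range(n):
        out[i] = state & 1
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = (state >> 1) | (fb << (length - 1))
    return out

def correlation_attack(observed, taps, length):
    """Exhaustive-search correlation attack on a single internal LFSR.

    Try every nonzero initial state and keep the one whose output agrees
    most with the observed stream: the divide-and-conquer idea described
    above, with a plain agreement count standing in for the binary
    derivative / Lempel-Ziv discriminators used in the dissertation.
    """
    n = len(observed)
    best_state, best_agree = None, -1
    for s0 in range(1, 1 << length):
        agree = int(np.sum(lfsr_stream(taps, s0, n, length) == observed))
        if agree > best_agree:
            best_state, best_agree = s0, agree
    return best_state, best_agree / n

rng = np.random.default_rng(1)
taps, length = (0, 4), 9                  # toy 9-bit LFSR (illustrative taps)
secret = 0b101101110
keystream = lfsr_stream(taps, secret, 400, length)
flips = (rng.random(400) < 0.35).astype(np.uint8)   # correlation level p = 0.65
print(correlation_attack(keystream ^ flips, taps, length))  # recovers `secret`
```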
100

Interactive Transcription of Old Text Documents

Serrano Martínez-Santos, Nicolás 09 June 2014 (has links)
Nowadays, there are huge collections of handwritten text documents in libraries all over the world. The high demand for these resources has led to the creation of digital libraries in order to facilitate the preservation of and provide electronic access to these documents. However, text transcriptions of these document images are not always available to allow users to quickly search information, or computers to process the information, search patterns or draw out statistics. The problem is that manual transcription of these documents is an expensive task from both economical and time viewpoints. This thesis presents a novel approach for efficient Computer Assisted Transcription (CAT) of handwritten text documents using state-of-the-art Handwriting Text Recognition (HTR) systems. The objective of CAT approaches is to efficiently complete a transcription task through human-machine collaboration, as the effort required to generate a manual transcription is high, and automatically generated transcriptions from state-of-the-art systems still do not reach the accuracy required. This thesis is centered on a special application of CAT, that is, the transcription of old text documents when the quantity of user effort available is limited, and thus, the entire document cannot be revised. In this approach, the objective is to generate the best possible transcription by means of the user effort available. This thesis provides a comprehensive view of the CAT process from feature extraction to user interaction. First, a statistical approach to generalise interactive transcription is proposed. As its direct application is unfeasible, some assumptions are made to apply it to two different tasks: first, the interactive transcription of handwritten text documents, and next, the interactive detection of the document layout. Next, the digitisation and annotation process of two real old text documents is described. This process was carried out because of the scarcity of similar resources and the need for annotated data to thoroughly test all the tools and techniques developed in this thesis. These two documents were carefully selected to represent the general difficulties that are encountered when dealing with HTR. Baseline results are presented on these two documents to establish a benchmark with a standard HTR system. Finally, these annotated documents were made freely available to the community. It must be noted that all the techniques and methods developed in this thesis have been assessed on these two real old text documents. Then, a CAT approach for HTR when user effort is limited is studied and extensively tested. The ultimate goal of applying CAT is achieved by putting together three processes, given a recognised transcription from an HTR system. The first process consists in locating (possibly) incorrect words and employs the user effort available to supervise them (if necessary). As most words are not expected to be supervised due to the limited user effort available, only a few are selected to be revised. The system presents to the user a small subset of these words according to an estimation of their correctness, or to be more precise, according to their confidence level. Next, the second process starts once these low confidence words have been supervised. This process updates the recognition of the document taking user corrections into consideration, which improves the quality of those words that were not revised by the user.
Finally, the last process adapts the system using the partially revised (and possibly not perfect) transcription obtained so far. In this adaptation, the system intelligently selects the correct words of the transcription. As a result, the adapted system will better recognise future transcriptions. Transcription experiments using this CAT approach show that it is most effective when user effort is low. The last contribution of this thesis is a method for balancing the final transcription quality and the supervision effort applied using our previously described CAT approach. In other words, this method allows the user to control the amount of errors in the transcriptions obtained from a CAT approach. The motivation of this method is to let users decide on the final quality of the desired documents, as partially erroneous transcriptions can be sufficient to convey the meaning, and the user effort required to transcribe them might be significantly lower when compared to obtaining a totally manual transcription. Consequently, the system estimates the minimum user effort required to reach the amount of error defined by the user. Error estimation is performed by computing separately the error produced by each recognised word, and thus asking the user to only revise the ones in which most errors occur. Additionally, an interactive prototype is presented, which integrates most of the interactive techniques presented in this thesis. This prototype has been developed to be used by palaeographic experts, who do not have any background in HTR technologies. After a slight fine-tuning by an HTR expert, the prototype lets the transcribers manually annotate the document or employ the CAT approach presented. All automatic operations, such as recognition, are performed in the background, detaching the transcriber from the details of the system. The prototype was assessed by an expert transcriber and was shown to be adequate and efficient for its purpose. The prototype is freely available under a GNU Public Licence (GPL). / Serrano Martínez-Santos, N. (2014). Interactive Transcription of Old Text Documents [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/37979 / TESIS
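The first CAT process (spending a limited supervision budget on the least confident words) reduces to a simple selection rule, sketched below. The unit-cost-per-word effort model and the confidence values are illustrative assumptions.

```python
def select_words_to_revise(words, confidences, effort_budget):
    """Pick the words a user should check, under a limited effort budget.

    Mirrors the first CAT process described above: rank recognized words
    by confidence and hand the least confident ones to the user. The cost
    model (one unit of effort per word) is a simplifying assumption.
    """
    ranked = sorted(range(len(words)), key=lambda i: confidences[i])
    return [words[i] for i in ranked[:effort_budget]]

hyp = ["in", "the", "vear", "1492", "colunbus", "sailed"]   # toy HTR output
conf = [0.98, 0.97, 0.41, 0.88, 0.35, 0.93]                 # toy confidences
print(select_words_to_revise(hyp, conf, effort_budget=2))   # ['colunbus', 'vear']
```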
