  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
71

Sequence Classification Using Hidden Markov Models

Desai, Pranay A. 13 July 2005 (has links)
No description available.
72

An evaluation of coded wavelet for multicarrier modulation with OFDM

Anoh, Kelvin O.O., Ghazaany, Tahereh S., Hussaini, Abubakar S., Abd-Alhameed, Raed, Jones, Steven M.R., Rodriguez, Jonathan January 2013 (has links)
Orthogonal frequency division multiplexing (OFDM) is prominent in wireless communication systems, and methods for improving the performance of OFDM-based systems are widely sought. One approach is error-correction coding; another is a better multicarrier modulation kernel. In this work, convolutional error-correction coding with interleaving is introduced into a wavelet multicarrier modulation OFDM system (wavelet-OFDM) to improve the performance of multicarrier systems as the signal traverses multipath and noisy transmission channels. This is compared with FFT-based multicarrier modulation (FFT-OFDM). Results show that coded wavelet-OFDM saves more than half of the transmit power compared with uncoded wavelet-OFDM. It is also shown that interleaved and non-interleaved coded wavelet-OFDM outperform interleaved coded and non-interleaved coded FFT-OFDM systems, respectively.
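The entry above combines convolutional coding with interleaving. A minimal block interleaver sketch shows the idea: bits are written row-wise and read column-wise, so a burst of channel errors is spread out and reaches the convolutional decoder as isolated errors. This is a generic illustration of the technique, not the specific interleaver used in the paper.

```python
def interleave(bits, rows, cols):
    """Write bits row-wise into a rows x cols matrix, read column-wise."""
    assert len(bits) == rows * cols
    matrix = [bits[r * cols:(r + 1) * cols] for r in range(rows)]
    return [matrix[r][c] for c in range(cols) for r in range(rows)]

def deinterleave(bits, rows, cols):
    """Invert interleave(): regroup by column, read row-wise."""
    assert len(bits) == rows * cols
    matrix = [bits[c * rows:(c + 1) * rows] for c in range(cols)]
    return [matrix[c][r] for r in range(rows) for c in range(cols)]
```

A burst of `rows` consecutive channel errors in the interleaved stream lands on `rows` different rows after deinterleaving, i.e. at most one error per `cols`-bit span.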
73

Automatic Phoneme Recognition with Segmental Hidden Markov Models

Baghdasaryan, Areg Gagik 10 March 2010 (has links)
A speaker-independent continuous-speech phoneme recognition and segmentation system is presented. We discuss the training and recognition phases of the phoneme recognition system and give a detailed description of the integrated elements. The Hidden Markov Model (HMM) based phoneme models are trained using the Baum-Welch re-estimation procedure. Recognition and segmentation of the phonemes in continuous speech is performed by a segmental Viterbi search on a segmental ergodic HMM over the phoneme states. We describe in detail the three phases of the joint phoneme recognition and segmentation system. First, the extraction of the Mel-Frequency Cepstral Coefficients (MFCC) and the corresponding Delta and Delta Log Power coefficients is described. Second, we describe the operation of the Baum-Welch re-estimation procedure for the training of the phoneme HMMs, including the K-Means and Expectation-Maximization (EM) clustering algorithms used to initialize the Baum-Welch algorithm. Additionally, we describe the structural framework of, and the recognition procedure for, the ergodic segmental HMM used for phoneme segmentation and recognition. We include test and simulation results for each of the individual systems integrated into the phoneme recognition system, and finally for the phoneme recognition and segmentation system as a whole. / Master of Science
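The Viterbi search at the core of the recognizer above can be sketched for a plain HMM. The thesis uses a segmental, ergodic variant; this toy two-state model and all its parameters are invented purely for illustration.

```python
from math import log

def viterbi(obs, states, log_start, log_trans, log_emit):
    """Most likely hidden-state path for an observation sequence,
    computed with log-probabilities to avoid numerical underflow."""
    delta = {s: log_start[s] + log_emit[s][obs[0]] for s in states}
    back = []                        # back[t][s] = best predecessor of s
    for o in obs[1:]:
        step, new_delta = {}, {}
        for t in states:
            best = max(states, key=lambda s: delta[s] + log_trans[s][t])
            step[t] = best
            new_delta[t] = delta[best] + log_trans[best][t] + log_emit[t][o]
        back.append(step)
        delta = new_delta
    last = max(states, key=lambda s: delta[s])
    path = [last]
    for step in reversed(back):      # trace the best path backwards
        path.append(step[path[-1]])
    return list(reversed(path))

# Toy two-state model (invented parameters, purely illustrative):
states = ['V', 'C']
log_start = {'V': log(0.6), 'C': log(0.4)}
log_trans = {'V': {'V': log(0.3), 'C': log(0.7)},
             'C': {'V': log(0.8), 'C': log(0.2)}}
log_emit = {'V': {'a': log(0.9), 'b': log(0.1)},
            'C': {'a': log(0.2), 'b': log(0.8)}}
path = viterbi(['a', 'b', 'a'], states, log_start, log_trans, log_emit)
```

The dynamic program keeps, per state, only the best path reaching it, which is what makes the search linear in the number of observations.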
74

Stochastic models for the estimation of the seismic hazard / Modèles stochastiques pour l'estimation du risque sismique

Pertsinidou, Christina Elisavet 03 March 2017 (has links)
In the first chapter the definition of seismic hazard assessment is provided, the seismotectonic features of the study areas are briefly presented, and the existing stochastic models applied in the field of seismology are thoroughly reviewed. In chapter 2, different semi-Markov models are developed for studying the seismicity of the central Ionian Islands and the North Aegean Sea (Greece). Quantities such as the semi-Markov kernel and the destination probabilities are evaluated, considering geometric, discrete-Weibull and Pareto distributed sojourn times. Useful results are obtained for forecasting purposes.
In the third chapter a new Viterbi algorithm for hidden semi-Markov models is developed, whose complexity is a linear function of the number of observations and a quadratic function of the number of hidden states, the lowest existing in the literature. Furthermore, an extension of this new algorithm is introduced for the case where an observation depends on the corresponding hidden state but also on the previous observation (the SM1-M1 case). In chapter 4, different hidden semi-Markov models (HSMMs) are applied to the study of the North and South Aegean Sea. The earthquake magnitudes and locations comprise the observation sequence, and the new Viterbi algorithm is implemented in order to decode the hidden stress field associated with seismogenesis. Precursory phases (variations of the hidden stress field) were detected, warning of an anticipated earthquake occurrence, in 70 out of 88 cases (the optimal model's score). The sojourn times of the hidden process were assumed to follow Poisson, logarithmic or negative binomial distributions, whereas the hidden stress levels were classified into 2, 3 or 4 states. Hidden Markov models were also applied without yielding significant results as regards the precursory phases. In chapter 5 a generalized Viterbi algorithm for HSMMs is constructed, in the sense that transitions to the same hidden state are now allowed and can also be decoded. Furthermore, an extension of this generalized algorithm to the SM1-M1 context is given. In chapter 6 we suitably modify the Cramér-Lundberg model, considering negative and positive claims, in order to describe the evolution in time of the Coulomb failure function changes (ΔCFF values) computed at the locations of seven strong (M ≥ 6) earthquakes of the North Aegean Sea. Ruin probability formulas are derived and proved in a general form, and corollaries are also formulated for the exponential and the Pareto distribution.
The aim is to shed light on the following problem posed by seismologists: during a specific year, why did an earthquake occur at one specific location and not at another in seismotectonically homogeneous areas with positive ΔCFF values (stress-enhanced areas)? The results demonstrate that the new probability formulas can contribute to answering this question.
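For context on the ruin-probability corollaries mentioned above: in the classical (unmodified) Cramér-Lundberg model with exponentially distributed positive claims, the ruin probability has a well-known closed form. The sketch below computes that standard special case only; it is not the thesis's modified model with signed claims.

```python
import math

def ruin_prob_exponential(u, lam, mu, c):
    """Ruin probability for the classical Cramer-Lundberg model:
    Poisson(lam) claim arrivals, exponential claims with mean mu,
    premium rate c, initial capital u. Under the net profit condition
    c > lam * mu:
        psi(u) = (lam*mu/c) * exp(-(c - lam*mu) * u / (c * mu)).
    """
    assert c > lam * mu, "net profit condition violated"
    return (lam * mu / c) * math.exp(-(c - lam * mu) * u / (c * mu))
```

With no initial capital, psi(0) = lam*mu/c (the expected claim load per unit premium), and the probability decays exponentially as the initial capital u grows.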
75

Kdy kdo mluví? / Speaker Diarization

Tomášek, Pavel January 2011 (has links)
This work addresses the task of speaker diarization. The goal is to implement a system which is able to decide "who spoke when". The particular components of the implementation are described: the main parts are feature extraction, voice activity detection, speaker segmentation and clustering, and finally postprocessing. This work also contains results of the implemented system on test data, including a description of the evaluation. The test data come from the NIST RT evaluations 2005-2007, and the lowest error rate achieved on this dataset is 18.52% DER. The results are compared with the diarization system implemented by Marijn Huijbregts from the Netherlands, who worked on the same data in 2009 and reached 12.91% DER.
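The DER figures quoted above are, in the standard NIST definition, the time-weighted sum of missed speech, false-alarm speech and speaker-confusion time divided by the total scored speaker time. A minimal sketch of that combination (the timings below are invented for illustration, not taken from the thesis's evaluation):

```python
def diarization_error_rate(missed, false_alarm, confusion, total):
    """NIST-style DER: time attributed to missed speech, false-alarm
    speech and speaker confusion, divided by total scored speaker
    time (all quantities in seconds)."""
    return (missed + false_alarm + confusion) / total

# Illustrative timings only, not the thesis's measurements:
der = diarization_error_rate(missed=30.0, false_alarm=12.0,
                             confusion=50.0, total=500.0)
```

Because the three error types are summed over time rather than over segments, long confused regions dominate the score, which is why clustering quality matters more than exact boundary placement.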
76

Rate Flexible Soft Decision Viterbi Decoder using SiLago

Baliga, Naveen Bantwal January 2021 (has links)
The IEEE 802.11a protocol is part of the IEEE 802 family of protocols for implementing WLAN Wi-Fi computer communications in various frequency bands. These protocols find applications worldwide, covering a wide range of devices such as mobile phones, computers, laptops and household appliances. Since wireless communication is used, the transmitted data is susceptible to noise. As a means to recover from noise, the transmitted data is encoded using convolutional encoding and correspondingly decoded on the receiver side. The decoder used is the Viterbi decoder, in the PHY layer of the protocol. This thesis investigates soft-decision Viterbi decoder implementations that meet the requirements of the IEEE 802.11a protocol. It aims to implement a rate-flexible design as a coarse-grained reconfigurable architecture using the SiLago framework. SiLago is a modular approach to ASIC design. Components are designed as hardened blocks, which means they are synthesised and pre-verified. Each block is also abuttable, like LEGO blocks, which allows users to connect compatible blocks and create designs specific to their requirements while getting performance similar to that of traditional ASICs. This approach significantly reduces design costs, as verification is a one-time task. The thesis discusses the strongly connected trellis Viterbi decoding algorithm and proposes a design for a soft-decision Viterbi decoder. The proposed design meets the throughput requirements of the communication protocol, and it can be reconfigured to work for 45 different code rates, with programmable soft-decision width and parallelism. The algorithm used is compared against MATLAB for its BER performance. Results from RTL simulations and the advantages and disadvantages of the proposed design are discussed, and recommendations for future improvements are made.
77

Design and Use of a CCSDS - Compatible Data Unit Decoder

O'Donnell, John, Ramirez, Jose 10 1900 (has links)
International Telemetering Conference Proceedings / October 17-20, 1994 / Town & Country Hotel and Conference Center, San Diego, California / The Consultative Committee for Space Data Systems (CCSDS) formulates and publishes recommendations for space data system standards. CCSDS Recommendations define a layered data communications network along the lines of the OSI model. In the space link layer (the OSI data link layer), fixed-length blocks of CCSDS Packets are generated and multiplexed into the data field of Virtual Channel Data Units (VCDUs) in the Virtual Channel Access sublayer. VCDUs with error-correction coding become CVCDUs (coded VCDUs). CVCDUs (or VCDUs) with an attached sync marker become Channel Access Data Units (CADUs), which are transmitted on the physical space channel. This paper discusses AYDIN's DEC012 Data Unit Decoder, a VMEbus circuit card which recovers Virtual Channel Data Units (VCDUs) from corrupted Channel Access Data Units (CADUs) received on the Space Link Subnet of a CCSDS-compatible space data communications link. The module's design and operation are described, along with its use in the X-ray Timing Explorer (XTE) and Tropical Rainfall Measuring Mission (TRMM) science satellite programs run by NASA Goddard Space Flight Center.
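The framing described above, a CVCDU prefixed with an attached sync marker (ASM) to form a CADU, can be sketched in software. The 32-bit ASM value 0x1ACFFC1D is the standard CCSDS marker; the frame contents and lengths below are arbitrary illustrative choices, and real frame synchronizers work on hardware bit streams, not byte strings.

```python
ASM = bytes.fromhex("1ACFFC1D")  # standard CCSDS 32-bit attached sync marker

def make_cadu(cvcdu: bytes) -> bytes:
    """Prefix a (coded) VCDU with the attached sync marker to form a CADU."""
    return ASM + cvcdu

def find_frames(stream: bytes, frame_len: int):
    """Recover fixed-length frames from a byte stream by searching for
    the ASM -- a simplified, software analogue of the decoder's frame
    synchronization."""
    frames, i = [], 0
    while True:
        i = stream.find(ASM, i)
        if i < 0 or i + len(ASM) + frame_len > len(stream):
            return frames
        frames.append(stream[i + len(ASM): i + len(ASM) + frame_len])
        i += len(ASM) + frame_len
```

The fixed frame length is what lets the receiver resynchronize cheaply: once one marker is found, the next is expected a known distance away.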
78

Rastreamento automático da bola de futebol em vídeos

Ilha, Gustavo January 2009 (has links)
A localização de objetos em uma imagem e acompanhamento de seu deslocamento numa sequência de imagens são tarefas de interesse teórico e prático. Aplicações de reconhecimento e rastreamento de padrões e objetos tem se difundido ultimamente, principalmente no ramo de controle, automação e vigilância. Esta dissertação apresenta um método eficaz para localizar e rastrear automaticamente objetos em vídeos. Para tanto, foi utilizado o caso do rastreamento da bola em vídeos esportivos, especificamente o jogo de futebol. O algoritmo primeiramente localiza a bola utilizando segmentação, eliminação e ponderação de candidatos, seguido do algoritmo de Viterbi, que decide qual desses candidatos representa efetivamente a bola. Depois de encontrada, a bola é rastreada utilizando o Filtro de Partículas auxiliado pelo método de semelhança de histogramas. Não é necessária inicialização da bola ou intervenção humana durante o algoritmo. Por fim, é feita uma comparação do Filtro de Kalman com o Filtro de Partículas no escopo do rastreamento da bola em vídeos de futebol. E, adicionalmente, é feita a comparação entre as funções de semelhança para serem utilizadas no Filtro de Partículas para o rastreamento da bola. Dificuldades, como a presença de ruído e de oclusão, tanto parcial como total, tiveram de ser contornadas. / The location of objects in an image and tracking its movement in a sequence of images is a task of theoretical and practical interest. Applications for recognition and tracking of patterns and objects have been spread lately, especially in the field of control, automation and vigilance. This dissertation presents an effective method to automatically locate and track objects in videos. Thereto, we used the case of tracking the ball in sports videos, specifically the game of football. 
The algorithm first locates the ball using segmentation, elimination and the weighting of candidates, followed by a Viterbi algorithm, which decides which of these candidates is actually the ball. Once found, the ball is tracked using the Particle Filter aided by the method of similarity of histograms. It is not necessary to initialize the ball or any human intervention during the algorithm. Next, a comparison of the Kalman Filter to Particle Filter in the scope of tracking the ball in soccer videos is made. And in addition, a comparison is made between the functions of similarity to be used in the Particle Filter for tracking the ball. Difficulties, such as the presence of noise and occlusion, in part or in total, had to be circumvented.
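The particle filter mentioned above can be sketched in one dimension as a generic bootstrap filter. Here a Gaussian likelihood around a scalar measurement stands in for the histogram-similarity score, so this is a toy analogue of the technique, not the dissertation's implementation.

```python
import math
import random

def particle_filter_step(particles, weights, measurement,
                         motion_std=1.0, meas_std=2.0):
    """One predict-reweight-resample step of a bootstrap particle
    filter tracking a 1-D position."""
    # Predict: diffuse each particle with random motion noise.
    particles = [p + random.gauss(0.0, motion_std) for p in particles]
    # Reweight: likelihood of the measurement given each particle
    # (Gaussian here; histogram similarity in the dissertation).
    weights = [w * math.exp(-0.5 * ((p - measurement) / meas_std) ** 2)
               for p, w in zip(particles, weights)]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample: draw a new population proportionally to the weights.
    particles = random.choices(particles, weights=weights, k=len(particles))
    return particles, [1.0 / len(particles)] * len(particles)
```

Unlike the Kalman filter, nothing here assumes a unimodal posterior, which is why the particle filter copes better with occlusion and clutter.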
79

Text Augmentation: Inserting markup into natural language text with PPM Models

Yeates, Stuart Andrew January 2006 (has links)
This thesis describes a new optimisation and new heuristics for automatically marking up XML documents, together with CEM, a Java implementation, using PPM models. CEM is significantly more general than previous systems, marking up large numbers of hierarchical tags, using n-gram models for large n and a variety of escape methods. Four corpora are discussed, including the bibliography corpus of 14682 bibliographies laid out in seven standard styles using the BibTeX system and marked up in XML with every field from the original BibTeX. The other corpora are the ROCLING Chinese text segmentation corpus, the Computists' Communique corpus and the Reuters' corpus. A detailed examination is presented of the methods of evaluating markup algorithms, including computational complexity measures and correctness measures from the fields of information retrieval, string processing, machine learning and information theory. A new taxonomy of markup complexities is established and the properties of each taxon are examined in relation to the complexity of marked-up documents. The performance of the new heuristics and optimisation is examined using the four corpora.
80

Evaluation of Word Length Effects on Multistandard Soft Decision Viterbi Decoding

Salim, Ahmed January 2011 (has links)
There have been proposals of many parity-inducing techniques, such as Forward Error Correction (FEC), which try to mitigate, if not completely eradicate, channel-induced errors. Convolutional codes are widely recognized as among the most efficient of the known channel coding techniques. The process of decoding a convolutionally encoded data stream at the receiving node can be quite complex, time consuming and memory inefficient.
This thesis outlines the implementation of a multistandard soft-decision Viterbi decoder and the word-length effects on it. The classic Viterbi algorithm and its variant, the soft-decision Viterbi algorithm, are discussed, along with zero-tail and tail-biting termination of the trellis. For the final implementation in C, the zero-tail termination approach with soft-decision Viterbi decoding is adopted. This memory-efficient implementation is flexible for any code rate and any constraint length.
The results obtained are compared with a MATLAB reference decoder. Simulation results are provided which show the performance of the decoder and reveal the interesting trade-off between finite word length and system performance. Such investigation can be very beneficial for the hardware design of communication systems. This is of high interest for the Viterbi algorithm, as convolutional codes have been selected in several well-known standards such as WiMAX, EDGE, IEEE 802.11a, GPRS, WCDMA, GSM, CDMA 2000 and 3GPP-LTE.
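The zero-tail convolutional coding pipeline described above can be illustrated with its hard-decision counterpart for the standard (7, 5) octal code; the generators and test message are textbook choices, not necessarily the configuration evaluated in the thesis. A soft-decision variant would replace the bit-mismatch counts with quantized soft metrics, whose word length is exactly the trade-off the thesis studies.

```python
# Rate-1/2, constraint-length-3 convolutional code, generators (7, 5)
# in octal, zero-tail terminated.

def conv_encode(bits):
    """Encode with generators 0b111 and 0b101; two zero tail bits
    drive the encoder back to the all-zero state."""
    state, out = 0, []
    for b in bits + [0, 0]:
        reg = (b << 2) | state                   # [b, b(t-1), b(t-2)]
        out.append(((reg >> 2) ^ (reg >> 1) ^ reg) & 1)   # generator 111
        out.append(((reg >> 2) ^ reg) & 1)                # generator 101
        state = reg >> 1
    return out

def viterbi_decode(received, nbits):
    """Hard-decision Viterbi decoder for the zero-tailed (7, 5) code,
    keeping one survivor path per trellis state."""
    INF = float('inf')
    metrics = [0.0, INF, INF, INF]               # start in state 0
    paths = [[], [], [], []]
    for t in range(nbits + 2):
        r1, r2 = received[2 * t], received[2 * t + 1]
        new_metrics = [INF] * 4
        new_paths = [None] * 4
        for state in range(4):
            if metrics[state] == INF:
                continue
            for b in (0, 1):
                reg = (b << 2) | state
                y1 = ((reg >> 2) ^ (reg >> 1) ^ reg) & 1
                y2 = ((reg >> 2) ^ reg) & 1
                m = metrics[state] + (y1 != r1) + (y2 != r2)
                ns = reg >> 1
                if m < new_metrics[ns]:
                    new_metrics[ns] = m
                    new_paths[ns] = paths[state] + [b]
        metrics, paths = new_metrics, new_paths
    return paths[0][:nbits]                      # tail forces final state 0
```

Because this code has free distance 5, the decoder recovers the message even when a channel bit is flipped, which is the error-correcting behavior the word-length study quantifies under quantized soft inputs.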
