141 |
Statistical Analysis of Wireless Communication Systems Using Hidden Markov Models. Rouf, Ishtiaq, 06 August 2009.
This thesis analyzes the use of hidden Markov models (HMMs) in wireless communication systems. HMMs are a probabilistic modeling technique that is useful for discrete channel modeling. The simulations performed in the thesis verified a previously formulated methodology. Power delay profiles (PDPs) of twelve wireless receivers were used for the experiment, and binary HMMs were used to reduce the computational burden. The PDP measurements were sampled to identify static receivers and to support grid-based analysis. This work is significant because it was performed in a new environment.
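As an illustration of the binary (two-state) discrete channel HMMs mentioned above, the sketch below simulates a Gilbert-Elliott style good/bad channel and generates a binary error sequence; the transition and error probabilities are illustrative assumptions, not values taken from the thesis.

```python
import numpy as np

# Illustrative two-state binary channel HMM (Gilbert-Elliott style).
# All probabilities below are assumed for demonstration only.
A = np.array([[0.95, 0.05],    # transitions: good -> {good, bad}
              [0.10, 0.90]])   #              bad  -> {good, bad}
p_error = np.array([0.01, 0.30])  # bit-error probability in each hidden state

def simulate_channel(n_bits, rng=np.random.default_rng(0)):
    """Generate a hidden state sequence and a binary error sequence."""
    states = np.empty(n_bits, dtype=int)
    errors = np.empty(n_bits, dtype=int)
    s = 0  # start in the "good" state
    for t in range(n_bits):
        states[t] = s
        errors[t] = rng.random() < p_error[s]
        s = rng.choice(2, p=A[s])
    return states, errors

states, errors = simulate_channel(10_000)
print("overall bit-error rate:", errors.mean())
```

Such a simulated error sequence is the kind of discrete channel behaviour that a binary HMM is fitted to.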
Stochastic game theory is also analyzed to gain insight into the decision-making process underlying HMMs. The study of game theory is significant because it analyzes rational decisions in detail by attaching a risk and a reward to every possibility.
Network security situation awareness has emerged as a novel application of HMMs in wireless networking. The doubly stochastic nature of HMMs is exploited in this process for behavioral analysis of network intrusions. The similarity of HMMs to artificial neural networks makes them useful for such applications. This application was demonstrated using simulations similar to those in the original works. / Master of Science
142 |
Machine Learning Techniques for Gesture Recognition. Caceres, Carlos Antonio, 13 October 2014.
Classification of human movement is a field of great interest to human-machine interface researchers. The reason for this lies in the large emphasis humans place on gestures while communicating with each other and while interacting with machines. Such gestures can be digitized in a number of ways, including both passive methods, such as cameras, and active methods, such as wearable sensors. While passive methods might be ideal, they are not always feasible, especially in unstructured environments. Instead, wearable sensors have gained interest as a method of gesture classification, especially for the upper limbs. Lower-arm movements are produced by a combination of multiple electrical signals known as motor unit action potentials (MUAPs). These signals can be recorded by electrodes placed on the surface of the skin and used for prosthetic control, sign language recognition, human-machine interfaces, and a myriad of other applications.
To move a step closer to these goal applications, this thesis compares three machine learning tools, Hidden Markov Models (HMMs), Support Vector Machines (SVMs), and Dynamic Time Warping (DTW), for recognizing a number of different gesture classes. It further contrasts the applicability of these tools to noisy data in the form of the Ninapro dataset, a benchmark put forth by a consortium of universities. Using this dataset as a basis, this work paves a path for the analysis required to optimize each of the three classifiers. Ultimately, care is taken to compare the three classifiers for their utility on noisy data, and a comparison is made against classification results reported by other researchers in the field.
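As a minimal sketch of one of the three classifiers compared here, the snippet below implements dynamic time warping as a nearest-neighbour gesture classifier over multichannel feature sequences; the channel count and the toy templates are assumptions for illustration and do not follow the Ninapro protocol.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two sequences of feature
    vectors (shape: [time, channels]), using a Euclidean local cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(query, templates):
    """1-nearest-neighbour classification against labelled template gestures."""
    return min(templates, key=lambda t: dtw_distance(query, t[1]))[0]

# Toy example: two 8-channel sEMG-like templates and a noisy query (assumed data).
rng = np.random.default_rng(1)
templates = [("wrist_flex", rng.standard_normal((50, 8))),
             ("hand_open", rng.standard_normal((60, 8)))]
query = templates[0][1] + 0.1 * rng.standard_normal((50, 8))
print(classify(query, templates))  # expected: wrist_flex
```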
The outcome of this work is over 90% recognition of individual gestures from the Ninapro dataset using two of the three classifiers. Comparison against previous work by other researchers shows these results to outperform all others reported thus far. Through further work with these tools, an end user might control a robotic or prosthetic arm, translate sign language, or simply interact with a computer. / Master of Science
143 |
Continuous HMM connected digit recognition. Padmanabhan, Ananth, 31 January 2009.
In this thesis we develop a system for recognition of strings of connected digits that can be used in a hands-free telephone system. We present a detailed description of the elements of the recognition system, such as an endpoint detection algorithm, the extraction of feature vectors from the speech samples, and the practical issues involved in training and recognition in a hidden Markov model (HMM) based speech recognition system.
We use continuous mixture densities to approximate the observation probability density functions (pdfs) in the HMM. While more complex to implement, continuous (observation) HMMs provide superior performance to discrete (observation) HMMs.
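To make the notion of a continuous (mixture) observation density concrete, the sketch below evaluates the log-likelihood of a single feature vector under one HMM state modelled as a Gaussian mixture with diagonal covariances; the mixture size, weights, means and variances are placeholders rather than trained values.

```python
import numpy as np

def state_log_likelihood(x, weights, means, variances):
    """log b_j(x) for one HMM state whose observation pdf is a Gaussian
    mixture with diagonal covariances.
    x: (d,) feature vector; weights: (M,); means, variances: (M, d)."""
    d = x.shape[0]
    log_comp = (np.log(weights)
                - 0.5 * (d * np.log(2 * np.pi)
                         + np.sum(np.log(variances), axis=1)
                         + np.sum((x - means) ** 2 / variances, axis=1)))
    m = log_comp.max()                      # log-sum-exp over the M components
    return m + np.log(np.exp(log_comp - m).sum())

# Placeholder 4-component mixture over a 12-dimensional feature vector.
rng = np.random.default_rng(0)
M, d = 4, 12
weights = np.full(M, 1.0 / M)
means = rng.standard_normal((M, d))
variances = np.ones((M, d))
print(state_log_likelihood(rng.standard_normal(d), weights, means, variances))
```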
Due to the nature of the application, ours is a speaker-dependent recognition system, and we have used a single speaker's speech to train and test our system. From the experimental evaluation of the effects of various model sizes on recognition performance, we observed that HMMs with 7 states and 4 mixture density components yield average recognition rates better than 99% on isolated digits. The level-building algorithm was used with the isolated digit models, which produced a recognition rate better than 90% for 2-digit strings. For 3- and 4-digit strings, the performance was 83% and 64%, respectively. These string recognition rates are much lower than expected for concatenations of single digits. This is most likely due to uncertainty in the locations of the concatenated digits, which increases disproportionately with the number of digits in the string. / Master of Science
144 |
Modeling Financial Volatility Regimes with Machine Learning through Hidden Markov Models. Nordhäger, Tobias; Ankarbåge, Per, January 2024.
This thesis investigates the application of hidden Markov models (HMMs) to modeling financial volatility regimes and presents a parameter learning approach using real-world data. Although HMMs are established as regime-switching models, empirical studies of parameter estimation for such models remain limited. We address this issue by creating a systematic approach (algorithm) for parameter learning using Python and the hmmlearn library. The algorithm works by initializing a wide range of random parameter values for an HMM and maximizing the log-likelihood of an observation sequence, obtained from market data, using expectation-maximization; the optimal number of volatility regimes for the HMM is determined using an information criterion. By training models on historical market and volatility index data, we found that a discrete model is favored for volatility modeling and option pricing due to its low complexity and high customizability, while a Gaussian model is favored for asset allocation and price simulation due to its ability to model market regimes. However, practical applications of these models were not researched and thus require further study to test and calibrate.
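A minimal sketch of the parameter-learning loop described above, using hmmlearn's GaussianHMM with several random initialisations and BIC-based selection of the number of regimes, is given below; the state range, restart count and synthetic return series are assumptions, not the authors' exact configuration.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

def fit_best_hmm(returns, n_states_range=range(2, 6), n_restarts=10):
    """Fit Gaussian HMMs with several random initialisations per state count
    and keep the model with the lowest BIC (an illustrative re-creation of
    the procedure described above, not the authors' code)."""
    X = np.asarray(returns).reshape(-1, 1)
    n = len(X)
    best_bic, best_model = np.inf, None
    for k in n_states_range:
        for seed in range(n_restarts):
            model = GaussianHMM(n_components=k, covariance_type="diag",
                                n_iter=200, random_state=seed)
            model.fit(X)                      # expectation-maximisation
            log_likelihood = model.score(X)
            # free parameters: initial probs, transitions, means, variances
            p = (k - 1) + k * (k - 1) + k + k
            bic = -2 * log_likelihood + p * np.log(n)
            if bic < best_bic:
                best_bic, best_model = bic, model
    return best_model

# Example on synthetic two-regime daily returns (assumed data).
rng = np.random.default_rng(0)
calm = rng.normal(0.0005, 0.005, 750)
turbulent = rng.normal(-0.001, 0.02, 250)
model = fit_best_hmm(np.concatenate([calm, turbulent]))
print("selected number of regimes:", model.n_components)
```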
145 |
Detection and Classification of Heart Sounds Using a Heart-Mobile Interface. Thiyagaraja, Shanti, 12 1900.
An early detection of heart disease can save lives, caution individuals, and help determine the type of treatment to be given to patients. The first test in diagnosing heart disease is auscultation: listening to the heart sounds. The interpretation of heart sounds is subjective and requires professional skill to identify abnormalities in these sounds. A medical practitioner uses a stethoscope to perform an initial screening by listening for irregular sounds from the patient's chest. Later, echocardiography and electrocardiography tests are taken for further diagnosis. However, these tests are expensive and require specialized technicians to operate. A simple and economical alternative is vital for monitoring in home care, rural hospitals, and urban clinics. This dissertation is focused on developing a patient-centered device for initial screening of heart sounds that is both low cost and usable by patients on themselves, who can later share the readings with their healthcare providers. An innovative mobile health service platform is created for analyzing and classifying heart sounds. Certain properties of heart sounds have to be evaluated to identify irregularities, such as the number of heart beats and gallops, intensity, frequency, and duration. Since heart sounds are generated at low frequencies, human ears tend to miss certain sounds, as the higher-frequency sounds mask the lower ones. Therefore, this dissertation provides a solution that processes the heart sounds using several signal processing techniques, identifies the features in the heart sounds, and finally classifies them. This dissertation enables remote patient monitoring through the integration of advanced wireless communications and a customized low-cost stethoscope. It also permits remote management of patients' cardiac status while maximizing patient mobility. The smartphone application facilitates recording, processing, visualizing, listening to, and classifying heart sounds. The application also generates an electronic medical record, which is encrypted using efficient elliptic curve cryptography and sent to the cloud, giving physicians access for further analysis. Thus, this dissertation results in a patient-centered device that is essential for initial screening of heart sounds and whose readings can be shared with medical practitioners for further diagnosis.
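As a sketch of the kind of preprocessing such a platform needs, given that heart sounds occupy low frequencies, the snippet below band-pass filters a phonocardiogram and extracts its amplitude envelope with SciPy; the 25-400 Hz band and the toy signal are assumed choices, not the dissertation's actual pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def heart_sound_envelope(pcg, fs, low=25.0, high=400.0):
    """Band-pass filter a phonocardiogram to the low-frequency band where
    heart sounds lie, then extract its amplitude envelope."""
    b, a = butter(4, [low, high], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, pcg)
    envelope = np.abs(hilbert(filtered))    # analytic-signal envelope
    return filtered, envelope

# Toy signal: two short "lub-dub" bursts plus noise (assumed data).
fs = 4000
t = np.arange(0, 2.0, 1 / fs)
pcg = 0.02 * np.random.default_rng(0).standard_normal(t.size)
for onset in (0.3, 1.1):
    idx = (t > onset) & (t < onset + 0.08)
    pcg[idx] += np.sin(2 * np.pi * 60 * t[idx])
filtered, envelope = heart_sound_envelope(pcg, fs)
print("peak envelope amplitude:", float(envelope.max()))
```

The envelope (or features derived from it) is the sort of representation that can then be segmented and passed to a classifier.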
146 |
A Multi-Target Graph-Constrained HMM Localisation Approach using Sparse Wi-Fi Sensor Data / Graf-baserad HMM Lokalisering med Wi-Fi Sensordata av Gångtrafikanter. Danielsson, Simon; Flygare, Jakob, January 2018.
This thesis explored the use of a hidden Markov model approach for multi-target localisation in an urban environment, with observations generated by Wi-Fi sensors. The area is modelled as a network of nodes and arcs, where the arcs represent sidewalks and constitute the hidden states of the model. The output of the model is the expected number of people on each road segment throughout the day. In addition, two methods for analyzing the impact of events in the area are proposed: the first is based on time series analysis, and the second on the transition matrix re-estimated with the Baum-Welch algorithm. Both methods reveal which road segments are most heavily affected by a surge of traffic in the area, as well as potential bottlenecks where congestion is likely to have occurred.
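A small sketch of the graph-constrained idea, building a transition matrix that only allows moves between adjacent arcs and aggregating forward-backward posteriors into expected per-segment counts, is shown below; the toy graph, emission probabilities and observation sequences are invented for illustration.

```python
import numpy as np

# Toy sidewalk graph: arcs are hidden states; transitions are only allowed
# between arcs that share a node (the adjacency below is assumed).
adjacency = np.array([[1, 1, 0, 0],
                      [1, 1, 1, 0],
                      [0, 1, 1, 1],
                      [0, 0, 1, 1]], dtype=float)
A = adjacency / adjacency.sum(axis=1, keepdims=True)  # graph-constrained transitions
pi = np.full(4, 0.25)
B = np.array([[0.7, 0.2, 0.1],   # assumed P(detection by sensor s | arc)
              [0.4, 0.4, 0.2],
              [0.2, 0.4, 0.4],
              [0.1, 0.2, 0.7]])

def posteriors(obs):
    """Forward-backward state posteriors for one device's sensor observations."""
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N)); beta = np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]; alpha[0] /= alpha[0].sum()
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        alpha[t] /= alpha[t].sum()
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
        beta[t] /= beta[t].sum()
    gamma = alpha * beta
    return gamma / gamma.sum(axis=1, keepdims=True)

# Expected number of people per arc and time step: sum posteriors over devices.
devices = [[0, 0, 1, 2], [2, 2, 1, 0]]
expected_per_arc = sum(posteriors(obs) for obs in devices)
print(expected_per_arc.round(2))
```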
147 |
Off-line signature verification using ensembles of local Radon transform-based HMMs. Panton, Mark Stuart, 03 1900.
Thesis (MSc)--Stellenbosch University, 2011. / ENGLISH ABSTRACT: An off-line signature verification system attempts to authenticate the identity of an individual by examining his/her handwritten signature, after it has been successfully extracted from, for example, a cheque, a debit or credit card transaction slip, or any other legal document. The questioned signature is typically compared to a model trained from known positive samples, after which the system attempts to label said signature as genuine or fraudulent.

Classifier fusion is the process of combining individual classifiers in order to construct a single classifier that is more accurate, albeit computationally more complex, than its constituent parts. A combined classifier therefore consists of an ensemble of base classifiers that are combined using a specific fusion strategy.

In this dissertation a novel off-line signature verification system, using a multi-hypothesis approach and classifier fusion, is proposed. Each base classifier is constructed from a hidden Markov model (HMM) that is trained from features extracted from local regions of the signature (local features), as well as from the signature as a whole (global features). To achieve this, each signature is zoned into a number of overlapping circular retinas, from which said features are extracted by implementing the discrete Radon transform. A global retina that encompasses the entire signature is also considered.

Since the proposed system attempts to detect high-quality (skilled) forgeries, it is unreasonable to assume that samples of these forgeries will be available for each new writer (client) enrolled into the system. The system is therefore constrained in the sense that only positive training samples, obtained from each writer during enrolment, are available. It is however reasonable to assume that both positive and negative samples are available for a representative subset of so-called guinea-pig writers (for example, bank employees). These signatures constitute a convenient optimisation set that is used to select the most proficient ensemble. A signature that is claimed to belong to a legitimate client (member of the general public) is therefore rejected or accepted based on the majority vote decision of the base classifiers within the most proficient ensemble.

When evaluated on a data set containing high-quality imitations, the inclusion of local features, together with classifier combination, significantly increases system performance. An equal error rate of 8.6% is achieved, which compares favorably to an equal error rate of 12.9% (an improvement of 33.3%) when only global features are considered.

Since there is no standard international off-line signature verification data set available, most systems proposed in the literature are evaluated on data sets that differ from the one employed in this dissertation. A direct comparison of results is therefore not possible. However, since the proposed system utilises significantly different features and/or modelling techniques than those employed in the above-mentioned systems, it is very likely that a superior combined system can be obtained by combining the proposed system with any of the aforementioned systems. Furthermore, when evaluated on the same data set, the proposed system is shown to be significantly superior to three other systems recently proposed in the literature.
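A brief sketch of the retina-based feature extraction described in the abstract above, computing discrete Radon transform projections of a circular retina with scikit-image so that each projection angle yields one observation vector, is given below; the retina placement, radius and angle count are illustrative choices rather than the dissertation's settings.

```python
import numpy as np
from skimage.transform import radon

def retina_observations(signature, centre, radius, n_angles=36):
    """Discrete Radon transform of one circular retina of a signature image.
    Each column of the returned sinogram (one projection angle) can serve as
    an observation vector for that retina's HMM."""
    r, c = centre
    patch = signature[r - radius:r + radius, c - radius:c + radius].astype(float)
    yy, xx = np.ogrid[-radius:radius, -radius:radius]
    patch = patch * (yy ** 2 + xx ** 2 <= radius ** 2)       # circular retina mask
    theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    return radon(patch, theta=theta, circle=True)             # (positions, angles)

# Toy "signature": a diagonal stroke on a blank page (assumed data).
image = np.zeros((200, 400))
rows = np.arange(60, 140)
image[rows, rows * 2] = 1.0
for centre in [(100, 200), (100, 260)]:          # two overlapping local retinas
    sinogram = retina_observations(image, centre, radius=40)
    print(centre, sinogram.shape)
```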
148 |
Efficient Decoding of High-order Hidden Markov Models. Engelbrecht, Herman A., 12 1900.
Thesis (PhD (Electrical and Electronic Engineering))--University of Stellenbosch, 2007. / Most speech recognition and language identification engines are based on hidden Markov models (HMMs). Higher-order HMMs are known to be more powerful than first-order HMMs, but have not been widely used because of their complexity and computational demands. The main objective of this dissertation was to develop a more time-efficient method of decoding high-order HMMs than the standard Viterbi decoding algorithm currently in use.

We proposed, implemented and evaluated two decoders based on the Forward-Backward Search (FBS) paradigm, which incorporate information obtained from low-order HMMs. The first decoder is based on time-synchronous Viterbi-beam decoding, where we wish to base our state pruning on the complete observation sequence. The second decoder is based on time-asynchronous A* search. The choice of heuristic is critical to A* search algorithms, and a novel, task-independent heuristic function is presented. The experimental results show that both of these proposed decoders result in more time-efficient decoding of the fully-connected, high-order HMMs that were investigated.

Three significant facts have been uncovered. The first is that conventional forward Viterbi-beam decoding of high-order HMMs is not as computationally expensive as is commonly thought. The second (and somewhat surprising) fact is that backward decoding of conventional, high-order left-context HMMs is significantly more expensive than conventional forward decoding. By developing the right-context HMM, we showed that the backward decoding of a mathematically equivalent right-context HMM is as expensive as the forward decoding of the left-context HMM. The third fact is that the use of information obtained from low-order HMMs significantly reduces the computational expense of decoding high-order HMMs. The comparison of the two new decoders indicates that the FBS-Viterbi-beam decoder is more time-efficient than the A* decoder. The FBS-Viterbi-beam decoder is not only simpler to implement, it also requires less memory than the A* decoder.

We suspect that the broader research community regards the Viterbi-beam algorithm as the most efficient method of decoding HMMs. We hope that the research presented in this dissertation will result in renewed investigation into decoding algorithms that are applicable to high-order HMMs.
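For reference, a compact sketch of time-synchronous Viterbi decoding with beam pruning for a first-order HMM, the baseline that the decoders above build on, is shown below; the toy model parameters and beam width are assumptions, and this is not the proposed FBS-Viterbi-beam decoder itself.

```python
import numpy as np

def viterbi_beam(log_pi, log_A, log_B, obs, beam=10.0):
    """Time-synchronous Viterbi decoding with beam pruning (log domain).
    States whose partial path score falls more than `beam` below the
    current best score are pruned at each frame."""
    N, T = log_pi.shape[0], len(obs)
    delta = log_pi + log_B[:, obs[0]]
    back = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        active = delta >= delta.max() - beam            # beam pruning
        scores = np.where(active[:, None], delta[:, None] + log_A, -np.inf)
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_B[:, obs[t]]
    path = np.empty(T, dtype=int)                        # trace back the best path
    path[-1] = delta.argmax()
    for t in range(T - 1, 0, -1):
        path[t - 1] = back[t, path[t]]
    return path, delta.max()

# Toy 3-state, 2-symbol HMM (assumed parameters).
pi = np.array([0.6, 0.3, 0.1])
A = np.array([[0.80, 0.15, 0.05],
              [0.10, 0.80, 0.10],
              [0.05, 0.15, 0.80]])
B = np.array([[0.9, 0.1],
              [0.5, 0.5],
              [0.1, 0.9]])
obs = [0, 0, 1, 1, 1, 0]
path, score = viterbi_beam(np.log(pi), np.log(A), np.log(B), obs)
print(path, round(float(score), 3))
```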
149 |
Improved Bayesian methods for detecting recombination and rate heterogeneity in DNA sequence alignments. Mantzaris, Alexander Vassilios, January 2011.
DNA sequence alignments are usually not homogeneous. Mosaic structures may result as a consequence of recombination or rate heterogeneity. Interspecific recombination, in which DNA subsequences are transferred between different (typically viral or bacterial) strains, may result in a change of the topology of the underlying phylogenetic tree. Rate heterogeneity corresponds to a change of the nucleotide substitution rate. Various methods for simultaneously detecting recombination and rate heterogeneity in DNA sequence alignments have recently been proposed, based on complex probabilistic models that combine phylogenetic trees with factorial hidden Markov models or multiple changepoint processes. The objective of my thesis is to identify potential shortcomings of these models and to explore ways of improving them.

One shortcoming that I have identified is related to an approximation made in various recently proposed Bayesian models. The Bayesian paradigm requires the solution of an integral over the space of parameters. To render this integration analytically tractable, these models assume that the vectors of branch lengths of the phylogenetic tree are independent among sites. While this approximation reduces the computational complexity considerably, I show that it leads to the systematic prediction of spurious topology changes in the Felsenstein zone, that is, the area of the branch-length configuration space where maximum parsimony consistently infers the wrong topology due to long-branch attraction. I demonstrate these failures by using two Bayesian hypothesis tests, based on an inter- and an intra-model approach to estimating the marginal likelihood. I then propose a revised model that addresses these shortcomings and demonstrate its improved performance on a set of synthetic DNA sequence alignments systematically generated around the Felsenstein zone.

The core model explored in my thesis is a phylogenetic factorial hidden Markov model (FHMM) for detecting two types of mosaic structures in DNA sequence alignments, related to recombination and rate heterogeneity. The focus of my work is on improving the modelling of the latter aspect. Earlier research efforts by other authors have modelled different degrees of rate heterogeneity with separate hidden states of the FHMM. Their work fails to appreciate the intrinsic difference between two types of rate heterogeneity: long-range regional effects, which are potentially related to differences in the selective pressure, and short-term periodic patterns within the codons, which merely capture the signature of the genetic code. I have improved these earlier phylogenetic FHMMs in two respects. Firstly, by sampling the rate vector from the posterior distribution with RJMCMC, I have made the modelling of regional rate heterogeneity more flexible, and I infer the number of different degrees of divergence directly from the DNA sequence alignment, thereby dispensing with the need to arbitrarily select this quantity in advance. Secondly, I explicitly model within-codon rate heterogeneity via a separate rate modification vector. In this way, the within-codon effect of rate heterogeneity is imposed on the model a priori, which facilitates the learning of the biologically more interesting effect of regional rate heterogeneity a posteriori. I have carried out simulations on synthetic DNA sequence alignments, which have borne out my conjecture. The existing model, which does not explicitly include the within-codon rate variation, has to model both effects with the same modelling mechanism; as expected, it was found to fail to disentangle these two effects. In contrast, I have found that my new model clearly separates within-codon rate variation from regional rate heterogeneity, resulting in more accurate predictions.
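A small numeric sketch of the decomposition described above, in which each site's substitution rate is the product of a regional rate and a fixed within-codon position modifier, is given below; all values are illustrative rather than estimates from the thesis.

```python
import numpy as np

# Two regions of regional rate heterogeneity over six codons (18 sites), with
# a fixed modifier for codon positions 1, 2 and 3 (all values are assumed).
n_codons = 6
regional_rate = np.repeat([0.5, 2.0], n_codons * 3 // 2)   # slow, then fast region
codon_modifier = np.tile([0.7, 0.4, 1.9], n_codons)        # positions 1, 2, 3
site_rate = regional_rate * codon_modifier
print(site_rate.reshape(n_codons, 3))   # one row per codon, one column per position
```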
150 |
Gestion de la variabilité morphologique pour la reconnaissance de gestes naturels à partir de données 3D / Addressing morphological variability for natural gesture recognition from 3D data. Sorel, Anthony, 06 December 2012.
Recognition of natural movements is of utmost importance in the implementation of intelligent and effective human-machine interfaces for virtual environments. It allows the user to behave naturally and the system to recognize body movements in the same way a human might perceive them. This task is complex, because it addresses several challenges: taking account of the specificities of the motion capture system, managing kinematic variability in motion performance, and taking account of the morphological differences between individuals, so that the actions of any new user can be recognized. Moreover, due to the interactive nature of virtual environments, this recognition must be achieved in real time, without waiting for the end of the motion. The literature offers many methods to meet the first two challenges, but the management of morphological variability is rarely addressed. In this thesis, we propose a description of movement that addresses this issue and evaluate its ability to recognize the movements of an unknown user. Finally, we propose a new method to take advantage of this representation for early motion recognition.