  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
  Our metadata is collected from universities around the world.
191

The optimization of gesture recognition techniques for resource-constrained devices

Niezen, Gerrit 26 January 2009 (has links)
Gesture recognition is becoming increasingly popular as an input mechanism for human-computer interfaces. The availability of MEMS (Micro-Electromechanical System) 3-axis linear accelerometers allows for the design of an inexpensive mobile gesture recognition system. Wearable inertial sensors are a low-cost, low-power solution to recognize gestures and, more generally, track the movements of a person. Gesture recognition algorithms have traditionally only been implemented in cases where ample system resources are available, i.e. on desktop computers with fast processors and large amounts of memory. In the cases where a gesture recognition algorithm has been implemented on a resource-constrained device, only the simplest algorithms were implemented to recognize only a small set of gestures. Current gesture recognition technology can be improved by making algorithms faster, more robust, and more accurate. The most dramatic results in optimization are obtained by completely changing an algorithm to decrease the number of computations. Algorithms can also be optimized by profiling or timing the different sections of the algorithm to identify problem areas. Gestures have two aspects of signal characteristics that make them difficult to recognize: segmentation ambiguity and spatio-temporal variability. Segmentation ambiguity refers to not knowing the gesture boundaries, and therefore reference patterns have to be matched with all possible segments of input signals. Spatio-temporal variability refers to the fact that each repetition of the same gesture varies dynamically in shape and duration, even for the same gesturer. The objective of this study was to evaluate the various gesture recognition algorithms currently in use, after which the most suitable algorithm was optimized in order to implement it on a mobile device. Gesture recognition techniques studied include hidden Markov models, artificial neural networks and dynamic time warping. 
A dataset for evaluating the gesture recognition algorithms was gathered using a mobile device’s embedded accelerometer. The algorithms were evaluated based on computational efficiency, recognition accuracy and storage efficiency. The optimized algorithm was implemented in a user application on the mobile device to test the empirical validity of the study. / Dissertation (MEng)--University of Pretoria, 2009. / Electrical, Electronic and Computer Engineering / unrestricted
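Of the three techniques this abstract compares, dynamic time warping is the easiest to illustrate: it tackles the spatio-temporal variability problem directly by aligning two sequences non-linearly in time before comparing them. A minimal sketch (illustrative only, not the thesis implementation; a real system would compare 3-axis accelerometer traces rather than 1-D toy sequences):

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences.

    Handles spatio-temporal variability: repetitions of the same
    gesture may differ in duration, so samples are aligned
    non-linearly before comparison.
    """
    n, m = len(a), len(b)
    # cost[i, j] = minimal cumulative cost of aligning a[:i] with b[:j]
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

# Toy example: the same "gesture" performed at two different speeds
template = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
slower = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 1.5, 1.0, 0.5, 0.0])
score = dtw_distance(template, slower)
```

The quadratic cost table is exactly why the thesis cares about optimization on resource-constrained devices: a banded or pruned variant would cut both memory and computation.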
192

Phoneme duration modelling for speaker verification

Van Heerden, Charl Johannes 26 June 2009 (has links)
Higher-level features are considered a potential remedy against transmission-line and cross-channel degradations, currently some of the biggest problems associated with speaker verification. Phoneme durations in particular are not altered by these factors; thus a robust duration model would be a particularly useful addition to traditional cepstral-based speaker verification systems. In this dissertation we investigate the feasibility of phoneme durations as a feature for speaker verification. Simple speaker-specific triphone duration models are created to statistically represent the phoneme durations. Durations are obtained from a hidden Markov model (HMM) based automatic speech recognition system and are modelled using single-mixture Gaussian distributions. These models are applied in a speaker verification system (trained and tested on the YOHO corpus) and found to be a useful feature, even when used in isolation. When fused with acoustic features, verification performance increases significantly. A novel speech rate normalization technique is developed in order to remove some of the inherent intra-speaker variability (due to differing speech rates). Speech rate variability has a negative impact on both speaker verification and automatic speech recognition. Although the duration modelling seems to benefit only slightly from this procedure, the fused system performance improvement is substantial. Other factors known to influence the duration of phonemes are incorporated into the duration model. Utterance-final lengthening is known to be a consistent effect and thus “position in sentence” is modelled. “Position in word” is also modelled, since triphones do not provide enough contextual information. This is found to improve performance, since the duration of some vowels is particularly sensitive to their position in the word. Data scarcity becomes a problem when building speaker-specific duration models.
By using information from available data, unknown durations can be predicted in an attempt to overcome the data scarcity problem. To this end we develop a novel approach to predict unknown phoneme durations from the values of known phoneme durations for a particular speaker, based on the maximum likelihood criterion. This model is based on the observation that phonemes from the same broad phonetic class tend to co-vary strongly, but that there are also significant cross-class correlations. This approach is tested on the TIMIT corpus and found to be more accurate than using back-off techniques. / Dissertation (MEng)--University of Pretoria, 2009. / Electrical, Electronic and Computer Engineering / unrestricted
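The single-mixture Gaussian triphone duration models described above can be sketched as follows. This is an illustrative reconstruction, not the dissertation's code; the triphone labels and frame counts below are hypothetical:

```python
import math
from collections import defaultdict

class DurationModel:
    """Speaker-specific single-mixture Gaussian duration model.

    Durations (in frames) observed for one speaker are summarised by
    a mean and variance per triphone; a test utterance is scored by
    the average log-likelihood of its durations under those Gaussians.
    """
    def __init__(self):
        self.samples = defaultdict(list)

    def add(self, triphone, duration):
        self.samples[triphone].append(duration)

    def _params(self, triphone):
        xs = self.samples[triphone]
        mu = sum(xs) / len(xs)
        var = sum((x - mu) ** 2 for x in xs) / len(xs)
        return mu, max(var, 1e-3)   # floor the variance

    def log_likelihood(self, observations):
        total = 0.0
        for triphone, duration in observations:
            mu, var = self._params(triphone)
            total += (-0.5 * math.log(2 * math.pi * var)
                      - (duration - mu) ** 2 / (2 * var))
        return total / len(observations)

# Hypothetical enrollment data: (triphone, duration-in-frames) pairs
enrol = [("s-ih+t", 9), ("s-ih+t", 10), ("s-ih+t", 11),
         ("ih-t+ax", 5), ("ih-t+ax", 6), ("ih-t+ax", 7)]
model = DurationModel()
for tri, d in enrol:
    model.add(tri, d)

# A matching claimant should score higher than a mismatched one
match_score = model.log_likelihood([("s-ih+t", 10), ("ih-t+ax", 6)])
impostor_score = model.log_likelihood([("s-ih+t", 20), ("ih-t+ax", 15)])
```

In a full system this score would be fused with the cepstral verifier's score, which is where the abstract reports the significant gains.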
193

Prediction and Analysis of Nucleosome Positions in DNA

Višňovský, Marek January 2013 (has links)
Eukaryotic DNA wraps around nucleosomes, which influences the higher-order structure of DNA and the accessibility of binding sites for general transcription factors and gene regions. Knowing where nucleosomes bind to DNA, and how strong this binding is, is therefore important for understanding the mechanisms of gene regulation. In this project, a new method for nucleosome prediction was implemented, based on an extension of hidden Markov models; the published data from Brogaard et al. (Brogaard K, Wang J-P, Widom J. Nature 486(7404), 496-501 (2012). doi:10.1038/nature11142) served as the training and test sets. Roughly 50% of nucleosomes were predicted correctly, a result comparable to existing methods. In addition, a series of experiments was carried out describing the properties of nucleosome sequences and their organization.
194

Zvyšování účinnosti strojového rozpoznávání řeči / Enhancing the effectiveness of automatic speech recognition

Zelinka, Petr January 2012 (has links)
This work identifies the causes of the unsatisfactory reliability of contemporary automatic speech recognition systems when deployed in demanding conditions. The impact of the individual sources of performance degradation is documented, and a list of known methods for their identification from the recognized signal is given. An overview of the usual methods to suppress the impact of disruptive influences on speech recognition performance is provided. The essential contribution of the work is the formulation of new approaches to constructing acoustic models of noisy speech and nonstationary noise, allowing high recognition performance in challenging conditions. The viability of the proposed methods is verified on an isolated-word speech recognizer utilizing a several-hour-long recording of real operating-room background acoustic noise recorded at the Uniklinikum Marburg in Germany. This work is the first to identify the impact of changes in the speaker’s vocal effort on the reliability of automatic speech recognition over the full vocal effort range (i.e., from whispering to shouting). A new concept of a speech recognizer immune to changes in vocal effort is proposed. For the purposes of research on changes in vocal effort, a new speech database, BUT-VE1, was created.
195

Detekce genů v DNA sekvencích / Gene Detection in DNA Sequences

Roubalík, Zbyněk January 2011 (has links)
Gene detection in DNA sequences is one of the most difficult problems currently being addressed in bioinformatics. This thesis deals with gene detection in DNA sequences using methods based on hidden Markov models. It contains a brief description of the fundamental principles of molecular biology, explains how genetic information is stored in DNA sequences, and covers the theoretical basis of hidden Markov models. The thesis then describes the design of a specific hidden Markov model for the problem of gene detection in DNA sequences. An application using this model for gene detection is designed and implemented. The application is tested on real data; the results of these tests are discussed at the end of the thesis, together with possible extensions and continuations of the project.
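A toy version of the gene-detection idea: a two-state HMM (coding vs. intergenic) decoded with the Viterbi algorithm. All probabilities below are illustrative assumptions, not parameters from the thesis:

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Viterbi decoding: most likely hidden state path for an
    observed sequence, computed in log space for numerical stability."""
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]])
          for s in states}]
    path = {s: [s] for s in states}
    for t in range(1, len(obs)):
        V.append({})
        new_path = {}
        for s in states:
            best_prev, best = max(
                ((p, V[t - 1][p] + math.log(trans_p[p][s])) for p in states),
                key=lambda x: x[1])
            V[t][s] = best + math.log(emit_p[s][obs[t]])
            new_path[s] = path[best_prev] + [s]
        path = new_path
    last = max(states, key=lambda s: V[-1][s])
    return path[last]

# Toy two-state model: coding regions GC-rich, intergenic AT-rich
states = ("coding", "intergenic")
start = {"coding": 0.5, "intergenic": 0.5}
trans = {"coding":     {"coding": 0.9, "intergenic": 0.1},
         "intergenic": {"coding": 0.1, "intergenic": 0.9}}
emit = {"coding":     {"A": 0.15, "C": 0.35, "G": 0.35, "T": 0.15},
        "intergenic": {"A": 0.35, "C": 0.15, "G": 0.15, "T": 0.35}}
seq = "ATATATGCGCGCGCATATAT"
labels = viterbi(seq, states, start, trans, emit)
```

Real gene finders use far richer state topologies (codon position, splice sites, strand), but the decoding machinery is the same.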
196

Preliminary study for detection and classification of swallowing sound / Étude préliminaire de détection et classification des sons de la déglutition

Khlaifi, Hajer 21 May 2019 (has links)
Les maladies altérant le processus de la déglutition sont multiples, affectant la qualité de vie du patient et sa capacité de fonctionner en société. La nature exacte et la gravité des changements post/pré-traitement dépendent de la localisation de l’anomalie. Une réadaptation efficace de la déglutition, cliniquement parlant, dépend généralement de l’inclusion d’une évaluation vidéo-fluoroscopique de la déglutition du patient dans l’évaluation post-traitement des patients en risque de fausse route. La restriction de cette utilisation est due au fait qu’elle est très invasive, comme d’autres moyens disponibles, tels que la fibre optique endoscopique. Ces méthodes permettent d’observer le déroulement de la déglutition et d’identifier les lieux de dysfonctionnement, durant ce processus, avec une précision élevée. "Mieux vaut prévenir que guérir" est le principe de base de la médecine en général. C’est dans ce contexte que se situe ce travail de thèse pour la télésurveillance des malades et plus spécifiquement pour suivre l’évolution fonctionnelle du processus de la déglutition chez des personnes à risques dysphagiques, que ce soit à domicile ou bien en institution, en utilisant le minimum de capteurs non-invasifs. C’est pourquoi le principal signal traité dans ce travail est le son. La principale problématique du traitement du signal sonore est la détection automatique du signal utile du son, étape cruciale pour la classification automatique de sons durant la prise alimentaire, en vue de la surveillance automatique. L’étape de la détection du signal utile permet de réduire la complexité du système d’analyse sonore. Les algorithmes issus de l’état de l’art traitant la détection du son de la déglutition dans le bruit environnemental n’ont pas montré une bonne performance. D’où l’idée d’utiliser un seuil adaptatif sur le signal, résultant de la décomposition en ondelettes. 
Les problématiques liées à la classification des sons en général et des sons de la déglutition en particulier sont abordées dans ce travail avec une analyse hiérarchique, qui vise à identifier dans un premier temps les segments de sons de la déglutition, puis à les décomposer en trois sons caractéristiques, ce qui correspond parfaitement à la physiologie du processus. Le couplage est également abordé dans ce travail. L’implémentation en temps réel de l’algorithme de détection a été réalisée. Cependant, celle de l’algorithme de classification reste en perspective. Son utilisation en clinique est prévue. / The diseases affecting and altering the swallowing process are multi-faceted, affecting the patient’s quality of life and ability to perform well in society. The exact nature and severity of the pre/post-treatment changes depend on the location of the anomaly. Effective swallowing rehabilitation clinically depends on the inclusion of a video-fluoroscopic evaluation of the patient’s swallowing in the post-treatment evaluation. Other means are available, such as endoscopic optical fibre. The drawback of these evaluation approaches is that they are very invasive. However, these methods make it possible to observe the swallowing process and identify areas of dysfunction during the process with high accuracy. "Prevention is better than cure" is the fundamental principle of medicine in general. In this context, this thesis focuses on remote monitoring of patients and, more specifically, monitoring the functional evolution of the swallowing process of people at risk of dysphagia, whether at home or in medical institutions, using the minimum number of non-invasive sensors. This has motivated monitoring the swallowing process by capturing only the acoustic signature of the process and modelling it as a sequence of acoustic events occurring within a specific time frame.
The main problem in such acoustic signal processing is the automatic detection of the relevant sound signals, a crucial step in the automatic classification of sounds during food intake for automatic monitoring. Detection of the relevant signal reduces the complexity of the subsequent analysis and characterisation of a particular swallowing process. State-of-the-art algorithms for detecting swallowing sounds against environmental noise were not sufficiently accurate. Hence the idea of using an adaptive threshold on the signal resulting from wavelet decomposition. The issues related to the classification of sounds in general, and swallowing sounds in particular, are addressed in this work with a hierarchical analysis that aims first to identify the swallowing sound segments and then to decompose them into three characteristic sounds, consistent with the physiology of the process. The coupling between detection and classification is also addressed in this work. A real-time implementation of the detection algorithm has been carried out. Clinical use of the classification is discussed, with a plan for its staged deployment subject to the normal processes of clinical approval.
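The adaptive-threshold-on-wavelet-coefficients idea can be sketched with a single-level Haar decomposition and a median-plus-MAD threshold on per-frame detail energy. The thesis's actual wavelet family, decomposition depth, and threshold rule are not reproduced here, so every constant below is an assumption:

```python
import numpy as np

def haar_detail(x):
    """One level of a Haar wavelet decomposition: the detail
    coefficients capture fast transients such as swallowing clicks."""
    x = x[: len(x) // 2 * 2]
    return (x[0::2] - x[1::2]) / np.sqrt(2.0)

def detect_events(signal, frame=32):
    """Flag frames whose Haar detail energy exceeds an adaptive
    threshold (median + k * MAD of per-frame energies)."""
    d = haar_detail(signal)
    n_frames = len(d) // frame
    energies = np.array([np.sum(d[i * frame:(i + 1) * frame] ** 2)
                         for i in range(n_frames)])
    med = np.median(energies)
    mad = np.median(np.abs(energies - med)) + 1e-12
    threshold = med + 5.0 * mad   # k = 5 chosen arbitrarily
    return energies > threshold

# Synthetic test signal: background noise plus one short burst
rng = np.random.default_rng(0)
signal = 0.05 * rng.standard_normal(4096)
signal[2000:2100] += np.sin(np.linspace(0, 60 * np.pi, 100))  # "swallow"
flags = detect_events(signal)
```

Because the threshold is derived from the signal itself, it adapts to the ambient noise floor, which is the property motivating its use over a fixed threshold.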
197

Décomposition en temps réel de signaux iEMG : filtrage bayésien implémenté sur GPU / On-line decomposition of iEMG signals using GPU-implemented Bayesian filtering

Yu, Tianyi 28 January 2019 (has links)
Un algorithme de décomposition en unités motrices d'un signal électromyographique intramusculaire (iEMG) a été proposé au laboratoire LS2N. Il s'agit d'un filtrage bayésien estimant l'état d'un modèle de Markov caché. Cet algorithme demande beaucoup de temps d'exécution, même pour un signal ne contenant que 4 unités motrices. Dans notre travail, nous avons d'abord validé cet algorithme dans une structure série. Nous avons proposé quelques modifications du modèle de recrutement des unités motrices et implémenté deux techniques de pré-traitement pour améliorer la performance de l'algorithme. Le banc de filtres de Kalman a été remplacé par un banc de filtres LMS. Le filtre global consiste en l'examen de divers scénarios arborescents d'activation des unités motrices : deux techniques heuristiques ont été introduites pour élaguer ces scénarios. Nous avons réalisé l'implémentation GPU de cet algorithme à structure parallèle intrinsèque. Nous avons réussi la décomposition de 10 signaux expérimentaux enregistrés sur deux muscles, respectivement avec électrode aiguille et électrode filaire. Le nombre d'unités motrices va de 2 à 8. Le pourcentage de superposition des potentiels d'unités motrices, qui représente la complexité du signal, varie de 6.56 % à 28.84 %. La précision de décomposition est supérieure à 90 % pour tous les signaux, sauf pour deux signaux à 30 % MVC dont la précision reste supérieure à 85 %. Nous sommes les premiers à réaliser la décomposition en temps réel d'un signal constitué de 10 unités motrices. / A sequential decomposition algorithm based on a hidden Markov model of the EMG, using Bayesian filtering to estimate the unknown parameters of the discharge series of motor units, was previously proposed in the laboratory LS2N. This algorithm successfully decomposed experimental iEMG signals with four motor units, but is highly time-consuming.
In this work, we first validated the proposed algorithm in a serial structure. We proposed some modifications to the activation process of the recruitment model in the hidden Markov model and implemented two signal pre-processing techniques to improve the performance of the algorithm. We then realized a GPU-oriented implementation of this algorithm, together with the modifications applied to the original model, in order to achieve real-time performance. We achieved the decomposition of 10 experimental iEMG signals acquired from two different muscles, respectively by fine wire electrodes and needle electrodes. The number of motor units ranges from 2 to 8. The percentage of superposition, representing the complexity of the iEMG signal, ranges from 6.56 % to 28.84 %. The decomposition accuracy is above 90 % for almost all experimental iEMG signals, except two signals at 30 % MVC (above 85 %). Moreover, we realized real-time decomposition for all these experimental signals through the parallel implementation. We are the first to achieve real-time full decomposition of a single-channel iEMG signal with up to 10 motor units, where full decomposition means resolving the superposition problem. Signals with more than 10 motor units can also be decomposed quickly, though not yet at the real-time level.
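The LMS filter bank that replaced the Kalman bank rests on the standard least-mean-squares update, sketched here on a toy system-identification problem. This is illustrative only; the thesis applies the idea to motor-unit action potentials, not to a generic FIR system:

```python
import numpy as np

def lms_filter(x, d, order=4, mu=0.05):
    """Least-mean-squares adaptive filter: learns weights w so that
    the filtered input tracks the desired signal d sample by sample."""
    w = np.zeros(order)
    y = np.zeros(len(x))
    e = np.zeros(len(x))
    for n in range(order, len(x)):
        u = x[n - order + 1:n + 1][::-1]   # most recent samples first
        y[n] = w @ u                       # filter output
        e[n] = d[n] - y[n]                 # estimation error
        w += 2 * mu * e[n] * u             # stochastic gradient step
    return y, e, w

# Identify a known FIR system from its input/output data
rng = np.random.default_rng(1)
x = rng.standard_normal(5000)
true_w = np.array([0.6, -0.3, 0.1, 0.05])
d = np.convolve(x, true_w)[:len(x)]
y, e, w = lms_filter(x, d)
```

Compared with a Kalman filter, LMS trades optimality for a much cheaper per-sample update (no covariance matrix), which matters when a whole bank of filters must run in real time on a GPU.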
198

Robustesse de la stratégie de trading optimale / Robustness of the optimal trading strategy

Bel Hadj Ayed, Ahmed 12 April 2016 (has links)
L’objectif principal de cette thèse est d’apporter de nouveaux résultats théoriques concernant la performance d’investissements basés sur des modèles stochastiques. Pour ce faire, nous considérons la stratégie optimale d’investissement dans le cadre d’un modèle d’actif risqué à volatilité constante et dont la tendance est un processus caché d’Ornstein-Uhlenbeck. Dans le premier chapitre, nous présentons le contexte et les objectifs de cette étude. Nous présentons également les différentes méthodes utilisées, ainsi que les principaux résultats obtenus. Dans le second chapitre, nous nous intéressons à la faisabilité de la calibration de la tendance. Nous répondons à cette question avec des résultats analytiques et des simulations numériques. Nous clôturons ce chapitre en quantifiant également l’impact d’une erreur de calibration sur l’estimation de la tendance et nous exploitons les résultats pour détecter son signe. Dans le troisième chapitre, nous supposons que l’agent est capable de bien calibrer la tendance et nous étudions l’impact qu’a la non-observabilité de la tendance sur la performance de la stratégie optimale. Pour cela, nous considérons le cas d’une utilité logarithmique et d’une tendance observée ou non. Dans chacun des deux cas, nous explicitons la limite asymptotique de l’espérance et de la variance du rendement logarithmique en fonction du ratio signal-sur-bruit et de la vitesse de retour à la moyenne de la tendance. Nous concluons cette étude en montrant que le ratio de Sharpe asymptotique de la stratégie optimale avec observations partielles ne peut dépasser 2/(3^1.5) ≈ 38,5 % du ratio de Sharpe asymptotique de la stratégie optimale avec informations complètes. Le quatrième chapitre étudie la robustesse de la stratégie optimale avec une erreur de calibration et compare sa performance à une stratégie d’analyse technique.
Pour y parvenir, nous caractérisons, de façon analytique, l’espérance asymptotique du rendement logarithmique de chacune de ces deux stratégies. Nous montrons, grâce à nos résultats théoriques et à des simulations numériques, qu’une stratégie d’analyse technique est plus robuste que la stratégie optimale mal calibrée. / The aim of this thesis is to study the robustness of the optimal trading strategy. The setting we consider is that of a stochastic asset price model where the trend follows an unobservable Ornstein-Uhlenbeck process. In the first chapter, the background and the objectives of this study are presented, along with the different methods used and the main results obtained. The question addressed in the second chapter is the estimation of the trend of a financial asset, and the impact of misspecification. Motivated by the use of Kalman filtering as a forecasting tool, we study the problem of parameter estimation and measure the effect of parameter misspecification. Numerical examples illustrate the difficulty of trend forecasting in financial time series. The question addressed in the third chapter is the performance of the optimal strategy, and the impact of partial information. We focus on the optimal strategy with a logarithmic utility function under full or partial information. For both cases, we provide the asymptotic expectation and variance of the logarithmic return as functions of the signal-to-noise ratio and of the trend mean-reversion speed. Finally, we compare the asymptotic Sharpe ratios of these strategies in order to quantify the loss of performance due to partial information. The aim of the fourth chapter is to compare the performance of the optimal strategy under parameter misspecification with that of a technical analysis trading strategy. For both strategies, we provide the asymptotic expectation of the logarithmic return as a function of the model parameters.
Finally, numerical examples show that an investment strategy using the cross moving averages rule is more robust than the optimal strategy under parameter misspecification.
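The setting of the thesis (a hidden Ornstein-Uhlenbeck trend under constant volatility) and the cross moving averages rule it compares against can be simulated in a few lines. All parameter values below are arbitrary illustrations, not the calibrated values from the thesis:

```python
import numpy as np

rng = np.random.default_rng(42)
dt, n = 1 / 252, 2520                  # ten years of daily steps
lam, sigma_mu, sigma = 2.0, 0.3, 0.2   # illustrative parameters

# Hidden Ornstein-Uhlenbeck trend: d(mu) = -lam*mu*dt + sigma_mu*dW
mu = np.zeros(n)
for t in range(1, n):
    mu[t] = (mu[t - 1] - lam * mu[t - 1] * dt
             + sigma_mu * np.sqrt(dt) * rng.standard_normal())

# Risky-asset log-returns with constant volatility
r = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
log_price = np.cumsum(r)

def cross_ma_position(lp, fast=20, slow=100):
    """Cross moving averages rule: long when the fast average of the
    log-price is above the slow one, short otherwise."""
    pos = np.zeros(len(lp))
    for t in range(slow, len(lp)):
        pos[t] = 1.0 if lp[t - fast:t].mean() > lp[t - slow:t].mean() else -1.0
    return pos

pos = cross_ma_position(log_price)
strategy_returns = pos[:-1] * r[1:]    # position held over the next period
```

The rule never needs the hidden trend parameters, which is exactly the robustness property the numerical comparison in the thesis exploits.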
199

A Classification Tool for Predictive Data Analysis in Healthcare

Victors, Mason Lemoyne 07 March 2013 (has links) (PDF)
Hidden Markov Models (HMMs) have seen widespread use in a variety of applications ranging from speech recognition to gene prediction. While developed over forty years ago, they remain a standard tool for sequential data analysis. More recently, Latent Dirichlet Allocation (LDA) was developed and soon gained widespread popularity as a powerful topic analysis tool for text corpora. We thoroughly develop LDA and a generalization of HMMs and demonstrate the conjunctive use of both methods in predictive data analysis for health care problems. While these two tools (LDA and HMM) have been used in conjunction previously, we use LDA in a new way to reduce the dimensionality involved in the training of HMMs. With both LDA and our extension of HMM, we train classifiers to predict development of Chronic Kidney Disease (CKD) in the near future.
200

Arabic text recognition of printed manuscripts. Efficient recognition of off-line printed Arabic text using Hidden Markov Models, Bigram Statistical Language Model, and post-processing.

Al-Muhtaseb, Husni A. January 2010 (has links)
Arabic text recognition has not been researched as thoroughly as that of other natural languages, yet the need for automatic Arabic text recognition is clear. In addition to traditional applications like postal address reading, cheque verification in banks, and office automation, there is large interest in searching scanned documents available on the internet and in searching handwritten manuscripts. Other possible applications are building digital libraries, recognizing text on digitized maps, recognizing vehicle license plates, serving as a first phase in text readers for visually impaired people, and understanding filled forms. This research work aims to contribute to current research in the field of optical character recognition (OCR) of printed Arabic text by developing novel techniques and schemes to advance the performance of state-of-the-art Arabic OCR systems. Statistical and analytical analysis of Arabic text was carried out to estimate the probabilities of occurrence of Arabic characters for use with hidden Markov models (HMMs) and other techniques. Since there is no publicly available dataset of printed Arabic text for recognition purposes, it was decided to create one. In addition, a minimal Arabic script is proposed. The proposed script contains all basic shapes of Arabic letters and provides an efficient representation of Arabic text in terms of effort and time. Based on the success of using HMMs for speech and text recognition, the use of HMMs for the automatic recognition of Arabic text was investigated. The HMM technique adapts to noise and font variations and does not require word or character segmentation of Arabic line images. In the feature extraction phase, experiments were conducted with a number of different features to investigate their suitability for HMMs. Finally, a novel set of features, which resulted in high recognition rates for different fonts, was selected.
The developed techniques do not need word or character segmentation before the classification phase, as segmentation is a byproduct of recognition. This seems to be the most advantageous feature of using HMMs for Arabic text, since segmentation tends to produce errors which are usually propagated to the classification phase. Eight different Arabic fonts were used in the classification phase. The recognition rates were in the range of 98% to 99.9%, depending on the font. As far as we know, these are new results in their context. Moreover, the proposed technique could be used for other languages: a proof-of-concept experiment was conducted on English characters with a recognition rate of 98.9% using the same HMM setup, and the same techniques were applied to Bangla characters with a recognition rate above 95%. The recognition of printed Arabic text with multiple fonts was also conducted using the same technique, with fonts categorized into different groups; new high recognition results were achieved. To enhance the recognition rate further, a post-processing module was developed to correct the OCR output through character-level and word-level post-processing. The use of this module improved the recognition rate by more than 1%. / King Fahd University of Petroleum and Minerals (KFUPM)
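The character-level post-processing with a bigram statistical language model can be illustrated by rescoring competing OCR hypotheses with a character-bigram model. The corpus and add-one smoothing below are toy assumptions, not the thesis's model or data:

```python
import math
from collections import Counter

# Character-bigram language model trained on a toy corpus
corpus = "the cat sat on the mat the rat ate the oat"
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)
V = len(set(corpus))   # vocabulary size for add-one smoothing

def log_p(seq):
    """Add-one smoothed bigram log-probability of a character sequence."""
    lp = 0.0
    for a, b in zip(seq, seq[1:]):
        lp += math.log((bigrams[(a, b)] + 1) / (unigrams[a] + V))
    return lp

def rescore(candidates):
    """Pick the OCR hypothesis the language model prefers."""
    return max(candidates, key=log_p)

# Hypothetical OCR confusions: 'h' misread as 'n', 'e' as 'w'
best = rescore(["the cat", "tne cat", "thw cat"])
```

The same idea scales to word-level post-processing by replacing characters with words, at the cost of a much larger model.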
