151

Genomic variation detection using dynamic programming methods

Zhao, Mengyao January 2014 (has links)
Thesis advisor: Gabor T. Marth / Background: Due to the rapid development and application of next generation sequencing (NGS) techniques, large amounts of NGS data have become available for genome-related biological research, such as population genetics, evolutionary research, and genome wide association studies. A crucial step in these genome-related studies is the detection of genomic variation between different species and individuals. Current approaches for the detection of genomic variation can be classified into alignment-based and assembly-based variation detection. Due to the limitation of current NGS read lengths, alignment-based variation detection remains the mainstream approach. The Smith-Waterman algorithm, which produces the optimal pairwise alignment between two sequences, is frequently used as a key component of fast heuristic read mapping and variation detection tools for next-generation sequencing data. Though various fast Smith-Waterman implementations have been developed, they are either designed as monolithic protein database searching tools that do not return detailed alignments, or they are embedded in other tools. These issues make reusing these efficient Smith-Waterman implementations impractical. After the alignment step in the traditional variation detection pipeline, the subsequent variation detection step, which uses pileup data and a Bayesian model, also faces great challenges, especially in low-complexity genomic regions. Sequencing errors and misalignment still strongly affect variation detection, especially INDEL detection. The accuracy of genomic variation detection therefore still needs to be improved, especially for low-complexity genomic regions and low-quality sequencing data. 
Results: To facilitate easy integration of the fast Single-Instruction-Multiple-Data (SIMD) Smith-Waterman algorithm into third-party software, we wrote a C/C++ library that extends Farrar's Striped Smith-Waterman (SSW) to return alignment information in addition to the optimal Smith-Waterman score. In this library we developed a new method to generate the full optimal alignment results and a suboptimal score in linear space at little cost in efficiency. This improvement makes the fast SIMD Smith-Waterman genuinely useful in genomic applications. SSW is available both as a C/C++ software library and as a stand-alone alignment tool at: https://github.com/mengyao/Complete-Striped-Smith-Waterman-Library. The SSW library has been used in the primary read mapping tool MOSAIK, the split-read mapping program SCISSORS, the MEI detector TANGRAM, and the read-overlap graph generation program RZMBLR. The speed of each of these tools improves significantly when its ordinary or banded Smith-Waterman module is replaced with the SSW library. To improve the accuracy of genomic variation detection, especially in low-complexity genomic regions and on low-quality sequencing data, we developed PHV, a genomic variation detection tool based on the profile hidden Markov model (PHMM). PHV also demonstrates a novel PHMM application in the genomic research field. The banded PHMM algorithms used in PHV make it a very fast HMM-based whole-genome variation detection tool. A comparison of PHV with GATK, Samtools and Freebayes on both simulated and real data shows that PHV has good potential for dealing with sequencing errors and misalignments. PHV also successfully detects a 49 bp deletion that is completely misaligned by the mapping tool and missed by GATK and Samtools. 
Conclusion: The efforts made in this thesis contribute to methodology development in genomic variation detection. The two novel algorithms presented here should also inform future work in NGS data analysis. / Thesis (PhD) — Boston College, 2014. / Submitted to: Boston College. Graduate School of Arts and Sciences. / Discipline: Biology.
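As a rough illustration of the dynamic programming recurrence that SSW accelerates with SIMD instructions, the sketch below implements plain Smith-Waterman local alignment. The scoring values are illustrative, not the SSW library's defaults, and unlike the library it returns only the optimal score and its end position, without a traceback.

```python
# Minimal Smith-Waterman local alignment (quadratic time, no traceback).
# Scoring parameters here (match=2, mismatch=-2, gap=-3) are illustrative only.

def smith_waterman(a, b, match=2, mismatch=-2, gap=-3):
    """Return the optimal local alignment score and its end position (i, j)."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best, best_pos = 0, (0, 0)
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            # Local alignment: scores are clamped at zero, so an alignment
            # can restart anywhere.
            H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
            if H[i][j] > best:
                best, best_pos = H[i][j], (i, j)
    return best, best_pos
```

Farrar's striping reorganises the inner loop of exactly this recurrence so that one SIMD instruction updates many cells of a column at once.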
152

Mesure de la fragilité et détection de chutes pour le maintien à domicile des personnes âgées / Measure of frailty and fall detection for helping elderly people to stay at home

Dubois, Amandine 15 September 2014 (has links)
Le vieillissement de la population est un enjeu majeur pour les prochaines années en raison, notamment, de l'augmentation du nombre de personnes dépendantes. La question du maintien à domicile de ces personnes se pose alors, du fait de l'impossibilité pour les instituts spécialisés de les accueillir toutes et, surtout, de la volonté des personnes âgées de rester chez elles le plus longtemps possible. Or, le développement de systèmes technologiques peut aider à résoudre certains problèmes comme celui de la sécurisation en détectant les chutes, et de l'évaluation du degré d'autonomie pour prévenir les accidents. Plus particulièrement, nous nous intéressons au développement des systèmes ambiants, peu coûteux, pour l'équipement du domicile. Les caméras de profondeur permettent d'analyser en temps réel les déplacements de la personne. Nous montrons dans cette thèse qu'il est possible de reconnaître l'activité de la personne et de mesurer des paramètres de sa marche à partir de l'analyse de caractéristiques simples extraites des images de profondeur. La reconnaissance d'activité est réalisée à partir des modèles de Markov cachés, et permet en particulier de détecter les chutes et des activités à risque. Lorsque la personne marche, l'analyse de la trajectoire du centre de masse nous permet de mesurer les paramètres spatio-temporels pertinents pour l'évaluation de la fragilité de la personne. Ce travail a été réalisé sur la base d'expérimentations menées en laboratoire, d'une part, pour la construction des modèles par apprentissage automatique et, d'autre part, pour évaluer la validité des résultats. Les expérimentations ont montré que certains modèles de Markov cachés, développés pour ce travail, sont assez robustes pour classifier les différentes activités. Nous donnons, également dans cette thèse, la précision, obtenue avec notre système, des paramètres de la marche en comparaison avec un tapis actimètrique. 
Nous pensons qu'un tel système pourrait facilement être installé au domicile de personnes âgées, car il repose sur un traitement local des images. Il fournit, au quotidien, des informations sur l'analyse de l'activité et sur l'évolution des paramètres de la marche qui sont utiles pour sécuriser et évaluer le degré de fragilité de la personne. / Population ageing is a major issue for society in the coming years, especially because of the increasing number of dependent people. The limited capacity of specialized institutes and the wish of the elderly to stay at home as long as possible explain a growing need for new, specific at-home services. Technology can help secure the person at home by detecting falls. It can also help in the evaluation of frailty for preventing future accidents. This work concerns the development of low-cost ambient systems for helping the elderly stay at home. Depth cameras allow analysing the displacement of the person in real time. We show that it is possible to recognize the activity of the person and to measure gait parameters from the analysis of simple features extracted from depth images. Activity recognition is based on Hidden Markov Models and allows detecting at-risk behaviours and falls. When the person is walking, the analysis of the trajectory of her centre of mass allows measuring gait parameters that can be used for frailty evaluation. This work is based on laboratory experimentations for the acquisition of data used for model training and for the evaluation of the results. We show that some of the developed Hidden Markov Models are robust enough for classifying the activities. We also evaluate the precision of the gait parameter measurements in comparison with those provided by an actimetric carpet. We believe that such a system could be installed in the home of the elderly because it relies on a local processing of the depth images. 
It would be able to provide daily information on the person's activity and on the evolution of her gait parameters, which are useful for securing her and evaluating her frailty.
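The HMM-based activity recognition described above can be hinted at with a toy Viterbi decoder over discretized depth-image features. The two states, their probabilities and the feature labels below are invented for illustration; the thesis's actual models are trained on laboratory data.

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely state sequence for a discrete-observation HMM (log domain)."""
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]]) for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({}); back.append({})
        for s in states:
            # Best predecessor for state s at time t.
            prev, lp = max(((r, V[t-1][r] + math.log(trans_p[r][s]))
                            for r in states), key=lambda x: x[1])
            V[t][s] = lp + math.log(emit_p[s][obs[t]])
            back[t][s] = prev
    last = max(V[-1], key=V[-1].get)
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return path[::-1]

# Hypothetical two-state model over coarse posture features (invented values):
states = ["walking", "falling"]
start = {"walking": 0.9, "falling": 0.1}
trans = {"walking": {"walking": 0.9, "falling": 0.1},
         "falling": {"walking": 0.2, "falling": 0.8}}
emit = {"walking": {"upright": 0.8, "on_ground": 0.2},
        "falling": {"upright": 0.1, "on_ground": 0.9}}
path = viterbi(["upright", "upright", "on_ground", "on_ground"],
               states, start, trans, emit)
```

Decoding a window of observations this way yields one activity label per frame, from which a fall event can be flagged.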
153

Detecção visual de atividade de voz com base na movimentação labial / Visual voice activity detection based on lip motion

Lopes, Carlos Bruno Oliveira January 2013 (has links)
O movimento dos lábios é um recurso visual relevante para a detecção da atividade de voz do locutor e para o reconhecimento da fala. Quando os lábios estão se movendo eles transmitem a idéia de ocorrências de diálogos (conversas ou períodos de fala) para o observador, enquanto que os períodos de silêncio podem ser representados pela ausência de movimentações dos lábios (boca fechada). Baseado nesta idéia, este trabalho foca esforços para detectar a movimentação de lábios e usá-la para realizar a detecção de atividade de voz. Primeiramente, é realizada a detecção de pele e a detecção de face para reduzir a área de extração dos lábios, sendo que as regiões mais prováveis de serem lábios são computadas usando a abordagem Bayesiana dentro da área delimitada. Então, a pré-segmentação dos lábios é obtida pela limiarização da região das probabilidades calculadas. A seguir, é localizada a região da boca pelo resultado obtido na pré-segmentação dos lábios, ou seja, alguns pixels que não são de lábios e foram detectados são eliminados, e em seguida são aplicados algumas operações morfológicas para incluir alguns pixels labiais e não labiais em torno da boca. Então, uma nova segmentação de lábios é realizada sobre a região da boca depois de aplicada uma transformação de cores para realçar a região a ser segmentada. Após a segmentação, é aplicado o fechamento das lacunas internas dos lábios segmentados. Finalmente, o movimento temporal dos lábios é explorado usando o modelo das cadeias ocultas de Markov (HMMs) para detectar as prováveis ocorrências de atividades de fala dentro de uma janela temporal. / Lip motion is a relevant visual feature for detecting speaker voice activity and for speech recognition. When the lips are moving, they convey the idea of occurrences of dialogue (talk) or periods of speech to the observer, whereas periods of silence may be represented by the absence of lip motion (mouth closed). 
Based on this idea, this work focuses on extracting lip motion features and using them to perform visual voice activity detection. First, the algorithm performs skin segmentation and face detection to reduce the search area for lip extraction, and the most likely lip regions are computed using a Bayesian approach within the delimited area. Then, a pre-segmentation of the lips is obtained by thresholding the calculated probability region. Next, the mouth region is localized using the result of the lip pre-segmentation, i.e., some detected non-lip pixels are eliminated, and simple morphological operators are applied to include some lip and non-lip pixels around the mouth. A new lip segmentation is then performed over the mouth region, after a color transformation is applied to enhance the region to be segmented. After segmentation, the internal gaps in the segmented lips are closed. Finally, the temporal motion of the lips is explored using Hidden Markov Models (HMMs) to detect the likely occurrence of active speech within a temporal window.
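The Bayesian lip/non-lip step can be sketched as a per-pixel application of Bayes' rule followed by thresholding. The likelihood functions, prior and the single "redness" feature below are toy stand-ins; in the work itself the class-conditional likelihoods would be learned from labelled pixel data.

```python
def lip_posterior(color, lip_likelihood, nonlip_likelihood, prior_lip=0.3):
    """P(lip | color) via Bayes' rule from class-conditional likelihoods."""
    num = lip_likelihood(color) * prior_lip
    den = num + nonlip_likelihood(color) * (1.0 - prior_lip)
    return num / den if den > 0 else 0.0

def pre_segment(pixels, lip_likelihood, nonlip_likelihood, threshold=0.5):
    """Pre-segmentation: keep pixels whose lip posterior clears the threshold."""
    return [p for p in pixels
            if lip_posterior(p, lip_likelihood, nonlip_likelihood) >= threshold]

# Toy likelihoods over a scalar redness value (invented for illustration);
# a real system would use colour histograms trained on lip / non-lip samples.
lip_like = lambda r: 0.9 if r > 0.5 else 0.1
nonlip_like = lambda r: 0.1 if r > 0.5 else 0.9
kept = pre_segment([0.8, 0.2, 0.9], lip_like, nonlip_like)
```

The thresholded mask would then be cleaned up with the morphological operators the abstract mentions before the second, refined segmentation.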
154

Analyse de la qualité des signatures manuscrites en-ligne par la mesure d'entropie / Quality analysis of online signatures based on entropy measure

Houmani, Nesma 13 January 2011 (has links)
Cette thèse s'inscrit dans le contexte de la vérification d'identité par la signature manuscrite en-ligne. Notre travail concerne plus particulièrement la recherche de nouvelles mesures qui permettent de quantifier la qualité des signatures en-ligne et d'établir des critères automatiques de fiabilité des systèmes de vérification. Nous avons proposé trois mesures de qualité faisant intervenir le concept d’entropie. Nous avons proposé une mesure de qualité au niveau de chaque personne, appelée «Entropie personnelle», calculée sur un ensemble de signatures authentiques d’une personne. L’originalité de l’approche réside dans le fait que l’entropie de la signature est calculée en estimant les densités de probabilité localement, sur des portions, par le biais d’un Modèle de Markov Caché. Nous montrons que notre mesure englobe les critères habituels utilisés dans la littérature pour quantifier la qualité d’une signature, à savoir: la complexité, la variabilité et la lisibilité. Aussi, cette mesure permet de générer, par classification non supervisée, des catégories de personnes, à la fois en termes de variabilité de la signature et de complexité du tracé. En confrontant cette mesure aux performances de systèmes de vérification usuels sur chaque catégorie de personnes, nous avons trouvé que les performances se dégradent de manière significative (d’un facteur 2 au minimum) entre les personnes de la catégorie «haute Entropie» (signatures très variables et peu complexes) et celles de la catégorie «basse Entropie» (signatures les plus stables et les plus complexes). Nous avons ensuite proposé une mesure de qualité basée sur l’entropie relative (distance de Kullback-Leibler), dénommée «Entropie Relative Personnelle» permettant de quantifier la vulnérabilité d’une personne aux attaques (bonnes imitations). Il s’agit là d’un concept original, très peu étudié dans la littérature. 
La vulnérabilité associée à chaque personne est calculée comme étant la distance de Kullback-Leibler entre les distributions de probabilité locales estimées sur les signatures authentiques de la personne et celles estimées sur les imitations qui lui sont associées. Nous utilisons pour cela deux Modèles de Markov Cachés, l'un est appris sur les signatures authentiques de la personne et l'autre sur les imitations associées à cette personne. Plus la distance de Kullback-Leibler est faible, plus la personne est considérée comme vulnérable aux attaques. Cette mesure est plus appropriée à l’analyse des systèmes biométriques car elle englobe en plus des trois critères habituels de la littérature, la vulnérabilité aux imitations. Enfin, nous avons proposé une mesure de qualité pour les signatures imitées, ce qui est totalement nouveau dans la littérature. Cette mesure de qualité est une extension de l’Entropie Personnelle adaptée au contexte des imitations: nous avons exploité l’information statistique de la personne cible pour mesurer combien la signature imitée réalisée par un imposteur va coller à la fonction de densité de probabilité associée à la personne cible. Nous avons ainsi défini la mesure de qualité des imitations comme étant la dissimilarité existant entre l'entropie associée à la personne à imiter et celle associée à l'imitation. Elle permet lors de l’évaluation des systèmes de vérification de quantifier la qualité des imitations, et ainsi d’apporter une information vis-à-vis de la résistance des systèmes aux attaques. Nous avons aussi montré l’intérêt de notre mesure d’Entropie Personnelle pour améliorer les performances des systèmes de vérification dans des applications réelles. 
Nous avons montré que la mesure d’Entropie peut être utilisée pour : améliorer la procédure d’enregistrement, quantifier la dégradation de la qualité des signatures due au changement de plateforme, sélectionner les meilleures signatures de référence, identifier les signatures aberrantes, et quantifier la pertinence de certains paramètres pour diminuer la variabilité temporelle. / This thesis is focused on the quality assessment of online signatures and its application to online signature verification systems. Our work aims at introducing new quality measures quantifying the quality of online signatures and thus establishing automatic reliability criteria for verification systems. We proposed three quality measures involving the concept of entropy, widely used in Information Theory. We proposed a novel quality measure per person, called "Personal Entropy", calculated on a set of genuine signatures of a given person. The originality of the approach lies in the fact that the entropy of a genuine signature is computed locally, on portions of the signature, based on local density estimation by a Hidden Markov Model. We show that our new measure includes the usual criteria of the literature, namely: signature complexity, signature variability and signature legibility. Moreover, this measure allows generating, by unsupervised classification, 3 coherent writer categories in terms of signature variability and complexity. Confronting this measure with the performance of two widely used verification systems (HMM, DTW) on each Entropy-based category, we show that the performance degrades significantly (by a factor of 2 at least) between persons of the "high Entropy" category, containing the most variable and the least complex signatures, and those of the "low Entropy" category, containing the most stable and the most complex signatures. 
We then proposed a novel quality measure based on the concept of relative entropy (also called Kullback-Leibler distance), denoted « Personal Relative Entropy », for quantifying a person's vulnerability to attacks (good forgeries). This is an original concept, and few studies in the literature are dedicated to this issue. This new measure computes, for a given writer, the Kullback-Leibler distance between the local probability distributions of his/her genuine signatures and those of his/her skilled forgeries: the higher the distance, the better the writer is protected from attacks. We show that such a measure simultaneously incorporates in a single quantity the usual criteria proposed in the literature for writer categorization, namely signature complexity and signature variability, as our Personal Entropy does, but also the vulnerability criterion to skilled forgeries. This measure is more appropriate for biometric systems because it makes a good compromise between the resulting improvement of the FAR and the corresponding degradation of the FRR. We also proposed a novel quality measure aiming at quantifying the quality of skilled forgeries, which is totally new in the literature. Such a measure is based on the extension of our Personal Entropy measure to the framework of skilled forgeries: we exploit the statistical information of the target writer to measure to what extent an impostor's hand-drawn forgery fits the probability density function associated with the target writer. In this framework, the quality of a skilled forgery is quantified as the dissimilarity between the target writer's own Personal Entropy and the entropy of the skilled forgery sample. Our experiments show that this measure allows an assessment of the quality of skilled forgeries in the main online signature databases available to the scientific community, and thus provides information about systems' resistance to attacks. 
Finally, we also demonstrated the interest of using our Personal Entropy measure for improving the performance of online signature verification systems in real applications. We show that the Personal Entropy measure can be used to: improve the enrolment process, quantify the quality degradation of signatures due to a change of platform, select the best reference signatures, identify outlier signatures, and quantify the relevance of time-function parameters in the context of temporal variability.
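Both entropy-based measures rest on standard information-theoretic quantities that can be computed directly once the local probability distributions have been estimated (in the thesis, via Hidden Markov Models). A minimal sketch with toy distributions standing in for the HMM-estimated ones:

```python
import math

def entropy(p):
    """Shannon entropy (in bits) of a discrete distribution p."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def kl_divergence(p, q):
    """Kullback-Leibler distance D(p || q) in bits, for aligned distributions."""
    return sum(x * math.log2(x / y) for x, y in zip(p, q) if x > 0)

# Toy local distributions (invented): genuine-signature model vs forgery model.
genuine = [0.7, 0.2, 0.1]
forgery = [0.4, 0.4, 0.2]
# In the thesis's terms, a low distance would indicate a writer more
# vulnerable to skilled forgeries.
vulnerability_distance = kl_divergence(genuine, forgery)
```

Personal Entropy averages such local entropies over portions of a writer's genuine signatures; Personal Relative Entropy replaces the entropy with the KL distance between the genuine and forgery models.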
155

SPEAKER AND GENDER IDENTIFICATION USING BIOACOUSTIC DATA SETS

Jose, Neenu 01 January 2018 (has links)
Acoustic analysis of animal vocalizations has been widely used to identify the presence of individual species, classify vocalizations, identify individuals, and determine gender. In this work, automatic speaker and gender identification of mice from their ultrasonic vocalizations, and speaker identification of meerkats from their close calls, are investigated. Feature extraction was implemented using Greenwood Function Cepstral Coefficients (GFCC), designed exclusively for extracting features from animal vocalizations. Mice ultrasonic vocalizations were analyzed using Gaussian Mixture Models (GMM), which yielded an accuracy of 78.3% for speaker identification and 93.2% for gender identification. Meerkat speaker identification from close calls was implemented using Gaussian Mixture Models (GMM) and Hidden Markov Models (HMM), with accuracies of 90.8% and 94.4% respectively. The results obtained show that these methods capture the gender and identity information present in vocalizations, and support the possibility of robust gender and individual identification using bioacoustic data sets.
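A maximum-likelihood classifier of the kind described above can be sketched by scoring a feature vector under one Gaussian per individual and picking the best. The means, variances and two-dimensional features below are invented; a real GMM system would use multi-component mixtures over GFCC features.

```python
import math

def diag_gauss_loglik(x, mean, var):
    """Log-likelihood of feature vector x under a diagonal-covariance Gaussian."""
    return sum(-0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
               for xi, m, v in zip(x, mean, var))

def identify(x, models):
    """Return the individual whose model assigns x the highest log-likelihood."""
    return max(models, key=lambda name: diag_gauss_loglik(x, *models[name]))

# Invented per-individual models, each a (mean, variance) pair — a degenerate
# one-component stand-in for a trained GMM over GFCC features.
models = {"mouse_a": ([0.0, 0.0], [1.0, 1.0]),
          "mouse_b": ([3.0, 3.0], [1.0, 1.0])}
```

Gender identification works the same way, with one model per gender instead of one per individual.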
156

Simple And Complex Behavior Learning Using Behavior Hidden Markov Model And Cobart

Seyhan, Seyit Sabri 01 January 2013 (has links) (PDF)
In this thesis, behavior learning and generation models are proposed for simple and complex behaviors of robots using unsupervised learning methods. Simple behaviors are modeled by a simple-behavior learning model (SBLM) and complex behaviors are modeled by a complex-behavior learning model (CBLM), which uses previously learned simple or complex behaviors. Both models have common phases named behavior categorization, behavior modeling, and behavior generation. Sensory data are categorized using a correlation based adaptive resonance theory network that generates motion primitives corresponding to the robot's base abilities in the categorization phase. In the modeling phase, Behavior-HMM, a modified version of the hidden Markov model, is used to model the relationships among the motion primitives in a finite state stochastic network. In addition, a motion generator, which is an artificial neural network, is trained for each motion primitive to learn the essential robot motor commands. In the generation phase, the desired task is presented as a target observation and the model generates the corresponding motion primitive sequence. Then, these motion primitives are executed successively by the motion generators that are specifically trained for the corresponding motion primitives. The models are not proposed for one specific behavior, but are intended to be bases for all behaviors. CBLM enhances learning capabilities by integrating previously learned behaviors hierarchically. Hence, new behaviors can take advantage of already discovered behaviors. The proposed models are tested on a robot simulator, and the experiments showed that the simple and complex-behavior learning models can generate requested behaviors effectively.
157

Autonomous Crop Segmentation, Characterisation and Localisation / Autonom Segmentering, Karakterisering och Lokalisering i Mandelplantager

Jagbrant, Gustav January 2013 (has links)
Orchards demand large areas of land, thus they are often situated far from major population centres. As a result it is often difficult to obtain the necessary personnel, limiting both growth and productivity. However, if autonomous robots could be integrated into the operation of the orchard, the manpower demand could be reduced. A key problem for any autonomous robot is localisation; how does the robot know where it is? In agriculture robots, the most common approach is to use GPS positioning. However, in an orchard environment, the dense and tall vegetation restricts the usage to large robots that reach above the surroundings. In order to enable the use of smaller robots, it is instead necessary to use a GPS independent system. However, due to the similarity of the environment and the lack of strong recognisable features, it appears unlikely that typical non-GPS solutions will prove successful. Therefore we present a GPS independent localisation system, specifically aimed for orchards, that utilises the inherent structure of the surroundings. Furthermore, we examine and individually evaluate three related sub-problems. The proposed system utilises a 3D point cloud created from a 2D LIDAR and the robot’s movement. First, we show how the data can be segmented into individual trees using a Hidden Semi-Markov Model. Second, we introduce a set of descriptors for describing the geometric characteristics of the individual trees. Third, we present a robust localisation method based on Hidden Markov Models. Finally, we propose a method for detecting segmentation errors when associating new tree measurements with previously measured trees. Evaluation shows that the proposed segmentation method is accurate and yields very few segmentation errors. Furthermore, the introduced descriptors are determined to be consistent and informative enough to allow localisation. Third, we show that the presented localisation method is robust both to noise and segmentation errors. 
Finally, it is shown that a significant majority of all segmentation errors can be detected without falsely labeling correct segmentations as incorrect. / Eftersom fruktodlingar kräver stora markområden är de ofta belägna långt från större befolkningscentra. Detta gör det svårt att finna tillräckligt med arbetskraft och begränsar expansionsmöjligheterna. Genom att integrera autonoma robotar i drivandet av odlingarna skulle arbetet kunna effektiviseras och behovet av arbetskraft minska. Ett nyckelproblem för alla autonoma robotar är lokalisering; hur vet roboten var den är? I jordbruksrobotar är standardlösningen att använda GPS-positionering. Detta är dock problematiskt i fruktodlingar, då den höga och täta vegetationen begränsar användandet till större robotar som når ovanför omgivningen. För att möjliggöra användandet av mindre robotar är det istället nödvändigt att använda ett GPS-oberoende lokaliseringssystem. Detta problematiseras dock av den likartade omgivningen och bristen på distinkta riktpunkter, varför det framstår som osannolikt att existerande standardlösningar kommer fungera i denna omgivning. Därför presenterar vi ett GPS-oberoende lokaliseringssystem, speciellt riktat mot fruktodlingar, som utnyttjar den naturliga strukturen hos omgivningen. Därutöver undersöker vi och utvärderar tre relaterade delproblem. Det föreslagna systemet använder ett 3D-punktmoln skapat av en 2D-LIDAR och robotens rörelse. Först visas hur en dold semi-markovmodell kan användas för att segmentera datasetet i enskilda träd. Därefter introducerar vi ett antal deskriptorer för att beskriva trädens geometriska form. Vi visar därefter hur detta kan kombineras med en dold markovmodell för att skapa ett robust lokaliseringssystem. Slutligen föreslår vi en metod för att detektera segmenteringsfel när nya mätningar av träd associeras med tidigare uppmätta träd. De föreslagna metoderna utvärderas individuellt och visar på goda resultat. 
Den föreslagna segmenteringsmetoden visas vara noggrann och ge upphov till få segmenteringsfel. Därutöver visas att de introducerade deskriptorerna är tillräckligt konsistenta och informativa för att möjliggöra lokalisering. Ytterligare visas att den presenterade lokaliseringsmetoden är robust både mot brus och segmenteringsfel. Slutligen visas att en signifikant majoritet av alla segmenteringsfel kan detekteras utan att felaktigt beteckna korrekta segmenteringar som inkorrekta.
158

Unsupervised hidden Markov model for automatic analysis of expressed sequence tags

Alexsson, Andrei January 2011 (has links)
This thesis provides an in-depth analysis of expressed sequence tags (ESTs), which represent pieces of eukaryotic mRNA, using an unsupervised hidden Markov model (HMM). ESTs are short nucleotide sequences that are used primarily for rapid identification of new genes with potential coding regions (CDS). ESTs are made by sequencing double-stranded cDNA, and the synthesized ESTs are stored in digital form, usually in FASTA format. Since sequencing is often randomized and parts of mRNA contain non-coding regions, some ESTs will not represent CDS. It is desirable to remove these unwanted ESTs if the purpose is to identify genes associated with CDS. Application of a stochastic HMM allows identification of the region contents of an EST. Software like ESTScan uses an HMM in which training is done by supervised learning with annotated data. However, because annotated data are not always at hand, this thesis focuses on the ability to train an HMM with unsupervised learning on data containing ESTs both with and without CDS. The data used for training are not annotated, i.e., the regions that an EST consists of are unknown. In this thesis a new HMM is introduced in which the parameters are designed to be reasonably consistent with biologically important regions of an mRNA, such as the Kozak sequence, poly(A)-signals and poly(A)-tails, so as to guide the training and decoding of ESTs to the proper states in the HMM. Transition probabilities in the HMM have been adapted to represent the mean length and distribution of the different regions in mRNA. Testing of the HMM's specificity and sensitivity has been performed via BLAST, by blasting each EST and comparing the BLAST results with the HMM prediction results. A regression analysis shows that the length of the ESTs used when training the HMM is significantly important: the longer, the better. 
The final results show that it is possible to train an HMM with unsupervised machine learning, but to be comparable to supervised approaches such as ESTScan, further extension of the HMM is necessary, such as frame-shift correction of ESTs by improving the HMM's ability to choose correctly positioned start codons or nucleotides. Usually the false positive results are caused by incorrectly positioned start codons leading to too-short CDS lengths. Since no frame-shift correction is implemented, short predicted CDS lengths are not acceptable and are hence not counted as coding regions during prediction. However, when supervised models are lacking, an unsupervised HMM is a potential replacement with stable performance that can be adapted for any eukaryotic organism.
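The abstract notes that transition probabilities were adapted to the mean lengths of the mRNA regions. In a standard HMM, a state's dwell time is geometrically distributed, so a self-transition probability p gives an expected region length of 1/(1-p). A small sketch of that relationship (the 300-nucleotide figure is illustrative, not taken from the thesis):

```python
def self_transition(mean_length):
    """Self-transition probability whose geometric dwell time has the given mean."""
    if mean_length < 1:
        raise ValueError("mean region length must be >= 1")
    # Expected dwell time of a state with self-transition p is 1 / (1 - p),
    # so p = 1 - 1 / mean_length.
    return 1.0 - 1.0 / mean_length

# e.g. a region averaging 300 nucleotides (hypothetical value)
p_stay = self_transition(300.0)
```

Setting each region state's self-transition this way is the simplest way to encode mean region lengths in the transition matrix, though the geometric shape is an approximation of the true length distribution.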
159

A Design of Recognition Rate Improving Strategy for Japanese Speech Recognition System

Lin, Cheng-Hung 24 August 2010 (has links)
This thesis investigates recognition rate improvement strategies for a Japanese speech recognition system. Both training data development and a consonant correction scheme are studied. For training data development, a database of 995 two-syllable Japanese words is established by phonetically balanced sieving. Furthermore, feature models for the 188 common Japanese mono-syllables are derived through a mixed position training scheme to increase the recognition rate. For consonant correction, a sub-syllable model is developed to enhance consonant recognition accuracy, and hence further improve the overall correct rate for whole Japanese phrases. Experimental results indicate that the average correct rate for a Japanese phrase recognition system with 34 thousand phrases can be improved from 86.91% to 92.38%.
160

Investment Decision Support with Dynamic Bayesian Networks

Wang, Sheng-chung 25 July 2005 (has links)
The stock market plays an important role in the modern capital market, so the prediction of financial assets attracts people from many different areas. Moreover, it is commonly accepted that stock price movement generally follows a major trend, which makes forecasting the market trend an important mission for a prediction method. Accordingly, we predict the long-term trend, rather than near-future movement or the change within a single trading day, as the target of our predicting approach. Although there are various kinds of analyses for trend prediction, most of them use clear cuts or fixed thresholds to classify trends; users (or investors) are not informed of the degree of confidence associated with a recommendation or trading signal. Therefore, in this research, we study an approach that offers the confidence of the trend analysis by providing the probability of each possible state given the historical data, through a Dynamic Bayesian Network. We incorporate the well-known principles of Dow's Theory to better model the trend of stock movements. The results of our experiment suggest that the financial performance of the proposed model is able to defeat the buy-and-hold trading strategy when the time scope covers an entire trend cycle. This means that for long-term investors, our approach has high potential to earn excess returns. At the same time, the trading frequency, and correspondingly the trading costs, can be reduced significantly.
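The per-state confidence this abstract refers to is what forward filtering in a dynamic Bayesian network provides: a posterior probability for each trend state after every observation. A minimal two-state sketch with invented probabilities and observation labels (not the thesis's actual network):

```python
def forward_filter(obs, states, start_p, trans_p, emit_p):
    """Posterior P(state_t | obs_1..t) at each step: a confidence per trend state."""
    belief = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    z = sum(belief.values())
    belief = {s: v / z for s, v in belief.items()}
    history = [belief]
    for o in obs[1:]:
        # Predict one step ahead through the transition model, then
        # update with the new observation and renormalize.
        predicted = {s: sum(belief[r] * trans_p[r][s] for r in states)
                     for s in states}
        updated = {s: predicted[s] * emit_p[s][o] for s in states}
        z = sum(updated.values())
        belief = {s: v / z for s, v in updated.items()}
        history.append(belief)
    return history

# Hypothetical two-state trend model with coarse daily observations:
states = ["up", "down"]
start = {"up": 0.5, "down": 0.5}
trans = {"up": {"up": 0.8, "down": 0.2}, "down": {"up": 0.2, "down": 0.8}}
emit = {"up": {"rise": 0.7, "fall": 0.3}, "down": {"rise": 0.3, "fall": 0.7}}
beliefs = forward_filter(["rise", "rise", "rise"], states, start, trans, emit)
```

Rather than a hard buy/sell cut, the filtered beliefs let an investor act only when the posterior for a trend state is high enough.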
