  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

A Design of Mandarin Keyword Spotting System

Wang, Yi-Lii 07 February 2003 (has links)
A Mandarin keyword spotting system based on LPC, VQ, discrete-time HMMs, and the Viterbi algorithm is proposed in this thesis. Combined with a dialogue system, the keyword spotting platform is further refined into a prototype Taiwan Railway Natural Language Reservation System. During the reservation process, the computer-dialogue attendant asks five questions: name and ID number, departure station, destination station, train type and number of tickets, and time schedule. After the customer's spoken confirmation, electronic tickets can be correctly issued and printed within 90 seconds in a laboratory environment.
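The decoding step such a spotter rests on can be sketched as a standard Viterbi pass over a discrete-observation HMM, with VQ codebook indices standing in for the observation symbols (a generic illustration, not the thesis's exact implementation):

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely state path for a discrete-observation HMM.

    obs : sequence of observation symbol indices (e.g. VQ codebook entries)
    pi  : (N,) initial state probabilities
    A   : (N, N) state transition matrix
    B   : (N, M) emission matrix, B[s, o] = P(symbol o | state s)
    """
    N, T = len(pi), len(obs)
    delta = np.log(pi) + np.log(B[:, obs[0]])   # log-probs avoid underflow
    psi = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + np.log(A)     # scores[i, j]: come from i, go to j
        psi[t] = scores.argmax(axis=0)          # best predecessor for each state
        delta = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(delta.argmax())]                # backtrack from the best final state
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return path[::-1]
```

In a keyword spotter, keyword and filler models are concatenated into one state space and the decoded path is scanned for passages through the keyword states.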
12

Clues from the beaten path : location estimation with bursty sequences of tourist photos

Chen, Chao-Yeh 14 February 2012 (has links)
Existing methods for image-based location estimation generally attempt to recognize every photo independently, and their resulting reliance on strong visual feature matches makes them best suited to distinctive landmark scenes. We observe that when touring a city, people tend to follow common travel patterns: for example, a stroll down Wall Street might be followed by a ferry ride, then a visit to the Statue of Liberty or Ellis Island museum. We propose an approach that learns these trends directly from online image data and then leverages them within a Hidden Markov Model to robustly estimate locations for novel sequences of tourist photos. We further devise a set-to-set matching-based likelihood that treats each "burst" of photos from the same camera as a single observation, thereby better accommodating images that may not contain particularly distinctive scenes. Our experiments with two large datasets of major tourist cities clearly demonstrate the approach's advantages over traditional methods that recognize each photo individually, as well as over a naive HMM baseline that lacks the proposed burst-based observation model.
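The burst-as-one-observation idea can be illustrated with a toy set-to-set score; the function name and the negative-squared-distance similarity are my assumptions, not the paper's actual matching function:

```python
import numpy as np

def burst_log_score(burst, exemplars):
    """Score one burst of photo descriptors against one candidate location.

    Treats the whole burst as a single observation: the burst matches the
    location as well as its best (photo, exemplar) descriptor pair does, so
    one distinctive photo can anchor an otherwise nondescript burst.
    Negative squared distance stands in for the similarity (assumption).
    """
    burst = np.asarray(burst, float)
    exemplars = np.asarray(exemplars, float)
    # all pairwise squared distances: photos x exemplars
    d2 = ((burst[:, None, :] - exemplars[None, :, :]) ** 2).sum(axis=2)
    return -d2.min()
```

Plugged into an HMM over locations, this score replaces the per-photo emission likelihood, while the transition matrix encodes the learned travel patterns.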
13

Portfolio Optimization under Partial Information with Expert Opinions

Frey, Rüdiger, Gabih, Abdelali, Wunderlich, Ralf January 2012 (has links) (PDF)
This paper investigates optimal portfolio strategies in a market with partial information on the drift. The drift is modelled as a function of a continuous-time Markov chain with finitely many states which is not directly observable. Information on the drift is obtained from the observation of stock prices. Moreover, expert opinions in the form of signals at random discrete time points are included in the analysis. We derive the filtering equation for the return process and incorporate the filter into the state variables of the optimization problem. This problem is studied with dynamic programming methods. In particular, we propose a policy improvement method to obtain computable approximations of the optimal strategy. Numerical results are presented at the end. (author's abstract)
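A discrete-time caricature of the filtering step can make the mechanics concrete (my own simplification; the paper works in continuous time and derives the exact filtering equation):

```python
import numpy as np

def drift_filter_step(pi, A, drifts, ret, sigma, dt):
    """One Euler-style update of the posterior over hidden drift states.

    pi     : current posterior over the Markov chain's states
    A      : one-step transition matrix of the chain over the interval dt
    drifts : per-state drift of the stock's log-return
    ret    : observed log-return over dt; sigma: volatility
    """
    pred = A.T @ pi                        # propagate the chain one step
    z = (ret - drifts * dt) / (sigma * np.sqrt(dt))
    post = pred * np.exp(-0.5 * z ** 2)    # Gaussian return likelihood per state
    return post / post.sum()

def expert_update(pi, signal_lik):
    """Fold in an expert opinion arriving as a per-state likelihood."""
    post = pi * signal_lik
    return post / post.sum()
```

The filtered state probabilities then enter the portfolio optimization as observable state variables, which is what makes the dynamic-programming treatment possible.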
14

Automatic Extraction of Highlights from a Baseball Video Using HMM and MPEG-7 Descriptors

Saudagar, Abdullah Naseer Ahmed 05 1900 (has links)
In today's fast-paced world, as the number of television stations offering programming increases rapidly, the time available to watch them stays the same or shrinks. Sports videos are typically lengthy and appeal to a massive audience, yet most viewers want to watch only specific, exciting segments, like a home run in baseball or a goal in soccer; that is, users prefer to watch highlights to save time. Compared to the entire span of the video, these segments form only a minor share, so such videos need to be summarized for effective presentation and data management. This thesis explores the automatic extraction of highlights using MPEG-7 features and a hidden Markov model (HMM), so that viewing time can be reduced. The video is first segmented into scene shots, in which shot detection is the fundamental task. After the video is segmented into shots, extraction of key frames provides a suitable representation of each shot. Feature extraction is a crucial processing step in classification, video indexing, and retrieval. Frame features such as color, motion, texture, and edges are extracted from the key frames. A baseball highlight contains certain types of scene shots, and these shots follow a particular transition pattern. The shots are classified as close-up, out-field, base, and audience. I first identify the type of each shot using low-level features extracted from its key frames. To identify highlights, I use a hidden Markov model over the transition pattern of the shots in the time domain. Experimental results suggest that highlights can be extracted from the video with reasonable accuracy.
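The shot-sequence step can be sketched as a likelihood comparison between a "highlight" HMM and a background model using the scaled forward algorithm (an illustrative reconstruction; the thesis's actual model topology and shot labels differ in detail):

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs) under a discrete HMM."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()            # rescale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        loglik += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return loglik

def is_highlight(shots, highlight_hmm, background_hmm):
    """Label a shot-class sequence a highlight if the highlight HMM explains it better."""
    return forward_loglik(shots, *highlight_hmm) > forward_loglik(shots, *background_hmm)
```

Each HMM is a `(pi, A, B)` triple trained on labelled shot sequences; the observation symbols are the shot classes produced by the low-level classifier.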
15

Predicting the Functional Effects of Human Short Variations Using Hidden Markov Models

Liu, Mingming 24 June 2015 (has links)
With the development of sequencing technologies, more and more sequence variants are available for investigation. Different types of variants in the human genome have been identified, including single nucleotide polymorphisms (SNPs), short insertions and deletions (indels), and large structural variations such as large duplications and deletions. Of great research interest is the functional effects of these variants. Although many programs have been developed to predict the effect of SNPs, few can be used to predict the effect of indels or multiple variants, such as multiple SNPs, multiple indels, or a combination of both. Moreover, fine grained prediction of the functional outcome of variants is not available. To address these limitations, we developed a prediction framework, HMMvar, to predict the functional effects of coding variants (SNPs or indels), using profile hidden Markov models (HMMs). Based on HMMvar, we proposed HMMvar-multi to explore the joint effects of multiple variants in the same gene. For fine grained functional outcome prediction, we developed HMMvar-func to computationally define and predict four types of functional outcome of a variant: gain, loss, switch, and conservation of function.
16

Improving the performance of Hierarchical Hidden Markov Models on Information Extraction tasks

Chou, Lin-Yi January 2006 (has links)
This thesis presents novel methods for creating and improving hierarchical hidden Markov models. The work centers around transforming a traditional tree-structured hierarchical hidden Markov model (HHMM) into an equivalent model that reuses repeated sub-trees. This process temporarily breaks the tree-structure constraint in order to leverage the benefits of combining repeated sub-trees: a lower cost of testing and an increased accuracy of the final model, and thus greater performance. The result is called a merged and simplified hierarchical hidden Markov model (MSHHMM). The thesis goes on to detail four techniques for improving the performance of MSHHMMs on information extraction tasks, in terms of accuracy and computational cost. Briefly, these techniques are: a new formula for calculating the approximate probability of previously unseen events; pattern generalisation to transform observations, thus increasing testing speed and prediction accuracy; restructuring states to focus on state transitions; and an automated flattening technique for reducing the complexity of HHMMs. The basic model and the four improvements are evaluated by applying them to the well-known information extraction tasks of Reference Tagging and Text Chunking. In both tasks, MSHHMMs show consistently good performance across varying sizes of training data. In the case of Reference Tagging, the accuracy of the MSHHMM is comparable to other methods; however, when the volume of training data is limited, MSHHMMs maintain high accuracy whereas other methods show a significant decrease. These accuracy gains were achieved without any significant increase in processing time. For the Text Chunking task the accuracy of the MSHHMM was again comparable to other methods, which, however, incurred much higher processing delays.
The results of these practical experiments demonstrate the benefits of the new method: increased accuracy, lower computational cost, and better overall performance.
17

Evaluation of evidence for autocorrelated data, with an example relating to traces of cocaine on banknotes

Wilson, Amy Louise January 2014 (has links)
Much research in recent years for evidence evaluation in forensic science has focussed on methods for determining the likelihood ratio in various scenarios. One proposition concerning the evidence is put forward by the prosecution and another is put forward by the defence. The likelihood of each of these two propositions is calculated, given the evidence. The likelihood ratio, or value of the evidence, is then given by the ratio of the likelihoods associated with these two propositions. The aim of this research is twofold. Firstly, it is intended to provide methodology for the evaluation of the likelihood ratio for continuous autocorrelated data. The likelihood ratio is evaluated for two such scenarios. The first is when the evidence consists of data which are autocorrelated at lag one. The second, an extension to this, is when the observed evidential data are also believed to be driven by an underlying latent Markov chain. Two models have been developed to take these attributes into account, an autoregressive model of order one and a hidden Markov model, which does not assume independence of adjacent data points conditional on the hidden states. A nonparametric model which does not make a parametric assumption about the data and which accounts for lag one autocorrelation is also developed. The performance of these three models is compared to the performance of a model which assumes independence of the data. The second aim of the research is to develop models to evaluate evidence relating to traces of cocaine on banknotes, as measured by the log peak area of the ion count for cocaine product ion m/z 105, obtained using tandem mass spectrometry. Here, the prosecution proposition is that the banknotes are associated with a person who is involved with criminal activity relating to cocaine and the defence proposition is the converse, which is that the banknotes are associated with a person who is not involved with criminal activity relating to cocaine. 
Two data sets are available, one of banknotes seized in criminal investigations and associated with crime involving cocaine, and one of banknotes from general circulation. Previous methods for the evaluation of this evidence were concerned with the percentage of banknotes contaminated, or assumed independence of measurements of quantities of cocaine on adjacent banknotes. It is known that nearly all banknotes have traces of cocaine on them, and it was found that there was autocorrelation within samples of banknotes, so these methods are not appropriate. The models developed for autocorrelated data are applied to evidence relating to traces of cocaine on banknotes; the results obtained for each of the models are compared using rates of misleading evidence, Tippett plots and scatter plots. It is found that the hidden Markov model is the best choice for the modelling of cocaine traces on banknotes because it has the lowest rate of misleading evidence and it also results in likelihood ratios which are large enough to give support to the prosecution proposition for some samples of banknotes seized from crime scenes. Comparison of the results obtained for models which take autocorrelation into account with the results obtained from the model which assumes independence indicates that not accounting for autocorrelation can result in the overstating of the likelihood ratio.
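For the AR(1) model, the likelihood ratio reduces to a difference of Gaussian log-likelihoods under the two propositions. The sketch below (with made-up parameter values, not the fitted banknote models) illustrates the computation:

```python
import numpy as np

def ar1_loglik(x, mu, phi, sigma):
    """Exact log-likelihood under a stationary AR(1) model:
    x_t - mu = phi * (x_{t-1} - mu) + eps_t,  eps_t ~ N(0, sigma^2)."""
    z = np.asarray(x, float) - mu
    var0 = sigma ** 2 / (1.0 - phi ** 2)       # stationary variance of x_1
    ll = -0.5 * (np.log(2 * np.pi * var0) + z[0] ** 2 / var0)
    resid = z[1:] - phi * z[:-1]               # one-step prediction errors
    ll -= 0.5 * np.sum(np.log(2 * np.pi * sigma ** 2) + resid ** 2 / sigma ** 2)
    return ll

def log_lr(x, prosecution, defence):
    """Log likelihood ratio of the two propositions, each a (mu, phi, sigma) tuple."""
    return ar1_loglik(x, *prosecution) - ar1_loglik(x, *defence)
```

A positive log-LR supports the prosecution proposition, a negative one the defence; in casework the two parameter sets would be estimated from the seized and general-circulation data sets respectively.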
18

Automated protein-family classification based on hidden Markov models

Frisk, Christoffer January 2015 (has links)
The aim of the project presented in this paper was to investigate the possibility of automatically sub-classifying the superfamily of Short-chain Dehydrogenases/Reductases (SDR). This was done based on an algorithm previously designed to sub-classify the superfamily of Medium-chain Dehydrogenases/Reductases (MDR). While the SDR family is interesting and important to sub-classify, there was also a focus on making the process as automatic as possible so that future families can be classified using the same methods. To validate the results, they were compared to previous sub-classifications of the SDR family. The results proved promising, and the work conducted here can be seen as a good initial part of a more comprehensive full investigation.
19

Aeronautical Channel Modeling for Packet Network Simulators

Khanal, Sandarva 10 1900 (has links)
ITC/USA 2011 Conference Proceedings / The Forty-Seventh Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2011 / Bally's Las Vegas, Las Vegas, Nevada / The introduction of network elements into telemetry systems brings a level of complexity that makes performance analysis difficult, if not impossible. Packet simulation is a well-understood tool that enables performance prediction for network designs or for operational forecasting. Packet simulators must, however, be customized to incorporate aeronautical radio channels and other effects unique to the telemetry application. This paper presents a method for developing a Markov Model simulation of aeronautical channels for use in packet network simulators such as OPNET Modeler. It shows how the Hidden Markov Model (HMM) and the Markov Model (MM) can be used together to first extract the channel behavior of an OFDM transmission over an aeronautical channel, and then replicate that statistical behavior during simulations in OPNET Modeler. Results demonstrate how a simple Markov Model can capture the behavior of very complex combinations of channel and modulation conditions.
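The extract-then-replay idea boils down to estimating a transition matrix from an HMM-decoded channel-state sequence and then sampling from it inside the simulator. A minimal sketch (state labels and the add-one smoothing are my assumptions):

```python
import numpy as np

def fit_markov(states, n_states):
    """Estimate a Markov transition matrix from a decoded channel-state
    sequence (e.g. per-packet good/bad states recovered by an HMM)."""
    counts = np.ones((n_states, n_states))    # add-one smoothing (assumption)
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def replay(P, start, n, rng):
    """Replay channel behaviour by sampling the fitted chain step by step."""
    s, out = start, []
    for _ in range(n):
        out.append(s)
        s = rng.choice(len(P), p=P[s])
    return out
```

Inside a packet simulator, each replayed state would map to a per-packet loss or error probability, reproducing the bursty behaviour of the measured channel without rerunning the physical-layer model.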
20

Secure Telemetry: Attacks and Counter Measures on iNET

Odesanmi, Abiola, Moten, Daryl 10 1900 (has links)
ITC/USA 2011 Conference Proceedings / The Forty-Seventh Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2011 / Bally's Las Vegas, Las Vegas, Nevada / iNET is a project aimed at improving and modernizing telemetry systems by moving from a link-based to a network-based solution. These changes introduce new risks and vulnerabilities. The nature of telemetry-system security changes when the elements are in an Ethernet and TCP/IP network configuration. The network will require protection from intrusion and malware that can be initiated internal or external to the network boundary. In this paper we discuss how to detect and counter FTP password attacks using the Hidden Markov Model for intrusion detection. We intend to discover and expose the more subtle iNET network vulnerabilities and make recommendations for a more secure telemetry environment.
