51

The Use of bioinformatics techniques to perform time-series trend matching and prediction

Transell, Mark Marriott January 2012 (has links)
Process operators often face recurring faults and alarms caused by repeated failures of process equipment, and some processes lack the input information or process models needed to apply conventional modelling or machine learning techniques for early fault detection. A proof of concept for online streaming prediction software, based on matching process behaviour to historical motifs, has been developed using the Basic Local Alignment Search Tool (BLAST) from the bioinformatics field. Execution times as low as 1 second have been recorded, demonstrating that online matching is feasible. Three techniques have been tested and compared in terms of their computational efficiency, robustness and selectivity, with results shown in Table 1:

• Symbolic Aggregate Approximation (SAX) combined with PSI-BLAST
• Naive Triangular Representation with PSI-BLAST
• Dynamic Time Warping (DTW)

Table 1: Properties of different motif-matching methods

Property                           | SAX-PSIBLAST | TER-PSIBLAST | DTW
Noise tolerance (selectivity)      | Acceptable   | Inconclusive | Good
Vertical shift tolerance           | None         | Perfect      | Poor
Matching speed                     | Acceptable   | Acceptable   | Fast
Match speed scaling                | < O(mn)      | < O(mn)      | O(mn)
Dimensionality reduction tolerance | Good         | Inconclusive | Acceptable

It is recommended that a method using a weighted confidence measure for each technique be investigated for online process event handling and operator alerts. Keywords: SAX, BLAST, motif-matching, Dynamic Time Warping / Dissertation (MEng)--University of Pretoria, 2012. / Chemical Engineering / unrestricted
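The SAX step used by the first two methods reduces a numeric series to a short symbol string that BLAST-style tools can search. A minimal sketch of standard SAX follows; this is not the dissertation's implementation, and the 4-symbol alphabet and segment count are illustrative assumptions:

```python
import numpy as np

# Breakpoints that cut the standard normal curve into equiprobable
# regions, here for a 4-symbol alphabet (a, b, c, d).
BREAKPOINTS = [-0.6745, 0.0, 0.6745]
ALPHABET = "abcd"

def sax(series, n_segments):
    """Convert a numeric series into a SAX word."""
    x = np.asarray(series, dtype=float)
    x = (x - x.mean()) / x.std()              # z-normalise
    # Piecewise Aggregate Approximation: mean of each segment
    paa = [seg.mean() for seg in np.array_split(x, n_segments)]
    # Map each segment mean to a symbol via the breakpoints
    return "".join(ALPHABET[np.searchsorted(BREAKPOINTS, v)] for v in paa)

print(sax([1, 2, 3, 5, 8, 13, 21, 34], 4))   # prints 'abbd' for this series
```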
52

A Novel Approach for Continuous Speech Tracking and Dynamic Time Warping. Adaptive Framing Based Continuous Speech Similarity Measure and Dynamic Time Warping using Kalman Filter and Dynamic State Model

Khan, Wasiq January 2014 (has links)
Dynamic speech properties such as time warping, silence removal and background noise interference are the most challenging issues in continuous speech signal matching. Among them, matching time-warped speech signals is of particular interest and has long been a tough challenge for researchers. This work introduces an adaptive-framing-based continuous speech tracking and similarity measurement approach, following comprehensive research across diverse areas of speech processing. A dynamic state model based on a system of linear motion equations treats the input (test) speech frame as an object moving unidirectionally along the template speech signal. The most similar corresponding frame position in the template speech is estimated and fused with a feature-based similarity observation and the noise variances using a Kalman filter. The Kalman filter provides the final estimated frame position in the template speech at the current time, which is then used to predict a new frame size for the next step. In addition, a keyword-spotting approach is proposed that introduces a wavelet-decomposition-based dynamic noise filter and a combination of beliefs; Dempster's theory of belief combination is applied to the keyword-spotting task for the first time. Both the speech tracking and keyword-spotting approaches are evaluated using statistical metrics and gold standards for binary classification, and experimental results demonstrate their superiority over existing methods. / The appendices files are not available online.
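The thesis's exact state model is not reproduced in its abstract; the sketch below only illustrates the general mechanism described there: a constant-velocity Kalman filter over the template frame index, fusing the motion prediction with a similarity-based position observation. All noise values and the observation sequence are tuning assumptions of this sketch:

```python
import numpy as np

# Minimal constant-velocity Kalman filter over the template frame index.
# State x = [position, velocity]; the observation z is the frame position
# suggested by the feature-similarity search (an assumption of this sketch).
F = np.array([[1.0, 1.0],      # position advances by one step of velocity
              [0.0, 1.0]])
H = np.array([[1.0, 0.0]])     # only position is observed
Q = np.eye(2) * 0.01           # process noise covariance (tuning assumption)
R = np.array([[4.0]])          # observation noise covariance (tuning assumption)

x = np.array([0.0, 1.0])       # start at frame 0, advancing 1 frame per step
P = np.eye(2)

def kalman_step(x, P, z):
    # Predict where the matching template frame should be now.
    x = F @ x
    P = F @ P @ F.T + Q
    # Fuse the prediction with the similarity-based observation z.
    y = z - H @ x                        # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

for z in [1.2, 2.1, 2.9, 4.4]:           # noisy matched positions
    x, P = kalman_step(x, P, z)
    print(f"estimated template frame: {x[0]:.2f}")
```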
53

A Comprehensive Framework for Stroke Trajectory Recovery for Unconstrained Handwritten Documents

Hanif, Sidra, 0000-0001-6531-7656 05 1900 (has links)
For a long time, handwriting analysis, such as handwriting recognition and signature verification, has been an active research area. There are two categories of handwriting: online and offline. Online handwriting is captured in real time on a digital device such as a tablet screen with a stylus pen. In contrast, handwritten text scanned or captured by a camera from a physical medium such as paper is referred to as offline handwriting. For offline handwriting, the input is limited to handwritten images, making handwriting analysis much more difficult. In our work, we propose a Stroke Trajectory Recovery (STR) method for offline and unconstrained handwritten documents. For this purpose, we introduce large-scale word-level annotations for English handwriting sampled from the IAM-online dataset. Current STR architectures for English handwriting use lines of text or characters of the alphabet as input; our word-level STR method instead estimates the loss for each word rather than averaging the Dynamic Time Warping (DTW) loss over an entire line of text. Furthermore, to avoid stray points and artifacts in the predicted stroke points, we employ a marginal Chamfer distance that penalises large, easily noticeable deviations. For word detection, we propose the fusion of character region scores with bounding box estimation. Since character-level annotations are not available for handwritten text, we estimate the character region scores in a weakly supervised manner: they are derived autonomously from the word's bounding box estimation to learn character-level information in handwriting. We propose to fuse the character region scores and images to detect words in camera-captured handwriting images. We also propose an automated evaluation to check the quality of the predicted stroke trajectory. Existing handwriting datasets have limited availability of stroke coordinate information; hence, although the proposed system can be applied to handwriting datasets without stroke coordinates, it is impossible to evaluate the quality of its predicted strokes using existing methods. Therefore, we propose two measures for evaluating the quality of recovered stroke trajectories when ground-truth stroke information is not given. First, we formulate an automated evaluation measure based on image matching, computing the difference between original and rendered images. Second, we evaluate whether the readability of words is preserved between original and rendered images using a transformer-based word recognition network. Since our proposed STR system works with words, we demonstrate that our method scales to unconstrained handwritten documents, i.e., full-page text. Finally, we present a probabilistic diffusion model conditioned on a handwriting-style template for generating writing strokes. We propose to learn localized patches for handwriting style features from a multiscale attention network, which captures fine details of local character style as well as global handwriting style. Moreover, we train our diffusion model with the Dynamic Time Warping (DTW) loss function alongside the diffusion loss, which eliminates the need to train auxiliary networks for text or writer-style recognition, or adversarial networks. / Computer and Information Science
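For intuition, a plain symmetric Chamfer distance between two stroke point sets looks like the sketch below; the thesis's "marginal" variant, which adds an extra penalty for large deviations, is not reproduced here, and the example points are made up:

```python
import numpy as np

def chamfer_distance(pred, gt):
    """Symmetric Chamfer distance between two point sets of shape (N, 2).

    A plain version of the distance the STR work builds on; the thesis's
    'marginal' variant is a modification of this.
    """
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    # Pairwise Euclidean distances between every predicted and true point.
    d = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=-1)
    # Each predicted point to its nearest true point, and vice versa.
    return d.min(axis=1).mean() + d.min(axis=0).mean()

stroke_pred = [(0, 0), (1, 1), (2, 2), (9, 9)]   # last point is a stray artifact
stroke_gt   = [(0, 0), (1, 1), (2, 2)]
print(chamfer_distance(stroke_pred, stroke_gt))  # the stray point inflates the score
```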
54

Método de previsão de vendas e estimativa de reposição de itens no varejo da moda [Sales forecasting and item replenishment estimation in fashion retail]

Santos, Graziele Marques Mazuco dos 26 April 2018 (has links)
Demand forecasting is one of the most essential components of supply chain management. Forecasts are used both for the long term and the short term. Long-term forecasts are important because it is difficult, in production terms, to absorb demand deviations at short notice, so forecasting ahead of time increases the responsiveness of the supply chain. Short-term forecasts are important for demand monitoring, aiming to keep inventory levels healthy. In the fashion industry, high product turnover, short life cycles and the lack of historical data make accurate prediction difficult. To deal with this problem, the literature presents three approaches: statistical, artificial intelligence, and hybrid approaches combining the two. This research presents a two-phase method: (1) long-term prediction, which identifies the different life cycles among products, allowing the identification of sales prototypes for each cluster; and (2) short-term prediction, which classifies new products into the clusters labelled in the long-term phase and adjusts the sales curve considering optimistic and pessimistic factors. As a differential, the method is based on Dynamic Time Warping, a distance measure for time series. The method is tested on a real dataset from fashion retailers, demonstrating the quality of the contribution. / [Translated from Portuguese:] Sales forecasting in fashion retail is a complex problem and one of the essential components of the supply chain, used for both long-term and short-term prediction. Long-term prediction is important because it is difficult, in production terms, to cope with demand deviations at short notice, so forecasting in advance increases the responsiveness of the supply chain. Short-term prediction is important for demand monitoring, aiming to keep inventory at an adequate level. In fashion retail, high turnover, the short life cycle of products and the consequent absence of historical data make it hard to generate accurate forecasts. To deal with this problem, the literature offers three main approaches: statistical, artificial-intelligence-based, and hybrid, combining statistics and artificial intelligence. This research proposes a two-stage sales forecasting method: (1) long-term prediction, which aims to detect groups of products with similar life cycles, allowing the identification of the average behaviour of each group; and (2) short-term prediction, which assigns new products to the groups identified in the long-term stage and adjusts the sales curve taking conservative, optimistic or pessimistic factors into account. At this stage it is also possible to forecast item replenishment. As a differential, the proposed method uses the Dynamic Time Warping distance measure, identified in the literature as well suited to time series. The method is tested using two real datasets from fashion retailers, in two experiments that demonstrate the quality of the contribution.
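The core ingredient of the long-term phase, pairwise DTW distances between life-cycle curves feeding a clustering step, can be sketched generically as below. The curves, the cluster count and the average-linkage choice are illustrative assumptions, not the dissertation's data or exact pipeline:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def dtw(a, b):
    """Classic O(n*m) DTW distance between two 1-D series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Hypothetical weekly sales curves; real life-cycle curves differ in length
# and timing, which is why DTW is preferred over Euclidean distance here.
curves = [np.array(c, dtype=float) for c in (
    [1, 5, 9, 7, 3, 1],     # fast rise, fast decay
    [1, 4, 8, 8, 4, 1],     # similar cycle, slightly stretched
    [1, 2, 3, 4, 5, 6],     # steady grower
)]

# Condensed pairwise DTW distances -> average-linkage clustering into 2 groups.
n = len(curves)
pairwise = [dtw(curves[i], curves[j]) for i in range(n) for j in range(i + 1, n)]
labels = fcluster(linkage(pairwise, method="average"), t=2, criterion="maxclust")
print(labels)   # the two bell-shaped curves should land in the same cluster
```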
55

Multi-agent coordination: fluid-inspired and optimal control approaches

Kingston, Peter 03 April 2012 (has links)
Multi-agent coordination problems arise in a variety of applications, from satellite constellations and formation flight to air traffic control and unmanned vehicle teams. We investigate the coordination of mobile agents using two kinds of approaches. In the first, which takes its inspiration from fluid dynamics and algebraic topology, control authority is split between mobile agents and a network of static infrastructure nodes - like wireless base stations or air traffic control towers - and controllers are developed that distribute their computation throughout this network. In the second, we look at networks of interconnected mechanical systems and develop novel optimal control algorithms, which involve computing optimal deformations of time- and output-spaces, to achieve approximate formation tracking. Finally, we investigate algorithms that optimize these controllers to meet humans' subjective criteria.
56

Improving process monitoring and modeling of batch-type plasma etching tools

Lu, Bo, active 21st century 01 September 2015 (has links)
Manufacturing equipment in semiconductor factories (fabs) provides abundant data and opportunities for data-driven process monitoring and modeling. In particular, virtual metrology (VM) is an active area of research. Traditional monitoring techniques using univariate statistical process control charts do not provide immediate feedback on quality excursions, hindering the implementation of fab-wide advanced process control initiatives. VM models, or inferential sensors, aim to bridge this gap by predicting quality measurements instantaneously from tool fault detection and classification (FDC) sensor measurements. Existing research on inferential sensors and VM has focused on comparing regression algorithms to demonstrate their feasibility in various applications. However, two important areas, data pretreatment and post-deployment model maintenance, are usually neglected in these discussions. Since industrial data is well known to be of poor quality, and since semiconductor processes undergo drifts and periodic disturbances, these two issues are the roadblocks to wider adoption of inferential sensors and VM models. In data pretreatment, batch data collected from FDC systems usually contain inconsistent trajectories of various durations, while most analysis techniques require the data from all batches to be of the same duration with similar trajectory patterns. These inconsistencies, if unresolved, propagate into the developed model, complicate interpretation of the modeling results and degrade model performance. To address this issue, a Constrained selective Derivative Dynamic Time Warping (CsDTW) method was developed to perform automatic alignment of trajectories. CsDTW is designed to preserve the key features that characterize each batch and can be solved efficiently in polynomial time. Variable selection after trajectory alignment is another topic that requires improvement; to this end, the proposed Moving Window Variable Importance in Projection (MW-VIP) method yields a more robust set of variables with demonstrably more long-term correlation with the predicted output. In model maintenance, model adaptation has been the standard solution for dealing with drifting processes. However, most case studies have preprocessed the model update data offline, an implicit assumption that the adaptation data is free of faults and outliers, which is often not true in practical implementations. To this end, a moving window scheme using Total Projection to Latent Structures (T-PLS) decomposition screens incoming updates to separate harmless process noise from the outliers that negatively affect the model; the integrated approach was demonstrated to be more robust. In addition, model adaptation is very inefficient when there are multiplicities in the process, which can occur due to process nonlinearity, switches in product grade, or different operating conditions. A growing-structure multiple-model system using local PLS and PCA models has been proposed to improve model performance around process conditions with multiplicity. The use of local PLS and PCA models allows the method to handle a much larger set of inputs and to overcome several challenges of mixture model systems; fault detection sensitivity is also improved by using the multivariate monitoring statistics of these local PLS/PCA models. These proposed methods are tested on two plasma etch datasets provided by Texas Instruments. 
In addition, a proof of concept using virtual metrology in a controller performance assessment application was also tested.
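CsDTW itself is the author's method; the sketch below only illustrates two generic ingredients it combines, derivative features (Keogh-style derivative DTW) and a global band constraint. The window size and test signals are assumptions of this sketch:

```python
import numpy as np

def derivative_dtw(a, b, window):
    """DTW on local derivative estimates with a |i-j| <= window band."""
    def deriv(x):
        # Keogh-style derivative estimate at interior points.
        return (x[1:-1] - x[:-2] + (x[2:] - x[:-2]) / 2.0) / 2.0

    da, db = deriv(np.asarray(a, float)), deriv(np.asarray(b, float))
    n, m = len(da), len(db)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(1, i - window), min(m, i + window) + 1):
            cost = (da[i - 1] - db[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

ref   = np.sin(np.linspace(0, 3, 50))
batch = np.sin(np.linspace(0, 3, 60)) + 5.0   # same shape, vertical offset
# Small distance despite the +5 offset: derivatives ignore constant shifts.
print(derivative_dtw(ref, batch, window=15))
```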
57

Extraction de connaissances symboliques et relationnelles appliquée aux tracés manuscrits structurés en-ligne [Symbolic and relational knowledge extraction applied to on-line structured handwritten strokes]

Li, Jinpeng 23 October 2012 (has links) (PDF)
[Translated from French:] Our work concerns knowledge extraction from graphical languages whose symbols are a priori unknown. We hypothesize that observing a large quantity of documents should make it possible to discover the symbols composing the alphabet of the language under consideration. The difficulty of the problem lies in the two-dimensional, handwritten nature of the graphical languages studied. We work with on-line strokes produced by input interfaces such as touch screens, interactive whiteboards or electronic pens. The available signal is a sampled trajectory producing a sequence of strokes, themselves composed of sequences of points. A symbol, the basic element of the language's alphabet, is therefore composed of a set of strokes with specific structural and relational properties. Symbol extraction is performed by discovering repetitive subgraphs in a global graph modelling the strokes (nodes) and their spatial relations (edges) across the whole document set. The Minimum Description Length (MDL) principle is applied to choose the best representatives of the symbol lexicon. This work was validated on two experimental datasets: the first consists of simple mathematical expressions, the second of flowchart-style diagrams. On these datasets we can evaluate the quality of the extracted symbols and compare against ground truth. Finally, we addressed reducing the annotation effort for a dataset by considering both the segmentation and the labelling of the individual strokes.
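The MDL selection step can be illustrated with a toy bit count: a candidate symbol is kept when encoding it once, plus the corpus graph with its occurrences collapsed, costs fewer bits than the original graph. The encoding below is a deliberately crude stand-in for the thesis's scheme, and all the sizes in the example are invented:

```python
import math

def description_length(n_nodes, n_edges, n_labels):
    """Crude bit cost of a labelled graph: each node stores a label;
    each edge stores a label plus two node references."""
    node_bits = n_nodes * math.log2(max(n_labels, 2))
    edge_bits = n_edges * (math.log2(max(n_labels, 2))
                           + 2 * math.log2(max(n_nodes, 2)))
    return node_bits + edge_bits

def compression_gain(graph, pattern, n_occurrences):
    """DL(G) - [DL(S) + DL(G compressed by S)] for a candidate symbol S.

    Each occurrence of the pattern is collapsed into a single node with
    one new label. Sizes are (n_nodes, n_edges, n_labels) tuples.
    """
    g_nodes, g_edges, labels = graph
    p_nodes, p_edges, _ = pattern
    full = description_length(g_nodes, g_edges, labels)
    pat = description_length(p_nodes, p_edges, labels)
    compressed = description_length(
        g_nodes - n_occurrences * (p_nodes - 1),   # occurrences collapse to 1 node
        g_edges - n_occurrences * p_edges,         # internal edges disappear
        labels + 1)                                # one new label for the symbol
    return full - (pat + compressed)

# A 3-stroke candidate symbol occurring 40 times in a 500-stroke corpus graph:
print(compression_gain(graph=(500, 900, 12), pattern=(3, 3, 12), n_occurrences=40))
```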
58

Automatic speech segmentation with limited data / by D.R. van Niekerk

Van Niekerk, Daniel Rudolph January 2009 (has links)
The rapid development of corpus-based speech systems, such as concatenative synthesis systems for under-resourced languages, requires an efficient, consistent and accurate solution for phonetic speech segmentation. Manual development of phonetically annotated corpora is a time-consuming and expensive process which suffers from challenges regarding consistency and reproducibility, while automation of this process has only been satisfactorily demonstrated on large corpora in a select few languages, employing techniques that require extensive and specialised resources. In this work we considered the problem of phonetic segmentation in the context of developing small prototypical speech synthesis corpora for new under-resourced languages. This was done through an empirical evaluation of existing segmentation techniques on typical speech corpora in three South African languages. In this process, the performance of these techniques was characterised under different data conditions, and the efficient application of these techniques was investigated in order to improve the accuracy of the resulting phonetic alignments. We found that baseline speaker-specific Hidden Markov Models yield relatively robust and accurate alignments even under extremely limited data conditions, and demonstrated how such models can be developed and applied efficiently in this context. The result is segmentation of sufficient quality for synthesis applications, with alignments comparable to manual segmentation efforts in this context. Finally, possibilities for further automated refinement of phonetic alignments were investigated and an efficient corpus development strategy was proposed, with suggestions for further work in this direction. / Thesis (M.Ing. (Computer Engineering))--North-West University, Potchefstroom Campus, 2009.
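At the heart of HMM-based segmentation is forced alignment: the phone sequence is known from the transcription, and dynamic programming chooses the frame boundaries. A minimal sketch with made-up scores follows; real systems use per-state Gaussian-mixture or neural likelihoods and multi-state phone models rather than this single-score-per-phone simplification:

```python
import numpy as np

def force_align(loglik, phone_seq):
    """Align a known phone sequence to frames by dynamic programming.

    loglik[t, p] is the log-likelihood of frame t under phone p (a
    stand-in for HMM output scores). Each phone must cover at least one
    frame; the DP chooses boundaries maximising the total score.
    """
    T, K = loglik.shape[0], len(phone_seq)
    score = np.full((T, K), -np.inf)
    back = np.zeros((T, K), dtype=int)        # 0 = stay in phone, 1 = advance
    score[0, 0] = loglik[0, phone_seq[0]]
    for t in range(1, T):
        for k in range(K):
            stay = score[t - 1, k]
            move = score[t - 1, k - 1] if k > 0 else -np.inf
            back[t, k] = int(move > stay)
            score[t, k] = max(stay, move) + loglik[t, phone_seq[k]]
    # Backtrace to recover which phone occupies each frame.
    k, labels = K - 1, [0] * T
    for t in range(T - 1, -1, -1):
        labels[t] = k
        k -= back[t, k]
    return labels

rng = np.random.default_rng(0)
loglik = rng.normal(size=(10, 3))            # fake scores: 10 frames, 3 phones
loglik[:4, 0] += 3; loglik[4:7, 1] += 3; loglik[7:, 2] += 3
print(force_align(loglik, phone_seq=[0, 1, 2]))  # boundaries near frames 4 and 7
```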
60

Using dynamic time warping for multi-sensor fusion

Ko, Ming Hsiao January 2009 (has links)
Fusion is a fundamental human process that occurs at every level, from the sense organs, where visual and auditory information is received through the eyes and ears, to the highest levels of decision making, where the brain fuses visual and auditory information to reach decisions. Multi-sensor data fusion is concerned with gaining information from multiple sensors by fusing across raw data, features or decisions. Traditional frameworks for multi-sensor data fusion only address fusion at specific points in time. However, many real-world situations change over time. When a multi-sensor system is used for situation awareness, it is useful not only to know the state or event of the situation at a point in time, but, more importantly, to understand the causalities of those states or events changing over time. / Hence, we propose a multi-agent framework for temporal fusion, which emphasises the time dimension of the fusion process, that is, fusion of multi-sensor data or events derived over a period of time. The proposed multi-agent framework has three major layers: hardware, agents, and users. There are three different fusion architectures for organising the group of agents: centralized, hierarchical, and distributed. The temporal fusion process of the proposed framework is elaborated using the information graph. Finally, the core of the proposed temporal fusion framework, the Dynamic Time Warping (DTW) temporal fusion agent, is described in detail. / Fusing multi-sensor data over a period of time is a challenging task, since the data to be fused consists of complex sequences that are multi-dimensional, multimodal, interacting, and time-varying in nature. Additionally, performing temporal fusion efficiently in real time is a further challenge due to the large amount of data to be fused. To address these issues, we propose the DTW temporal fusion agent, which includes four major modules: data pre-processing, a DTW recogniser, class templates, and decision making. The DTW recogniser is extended in various ways to deal with the variability of multimodal sequences acquired from multiple heterogeneous sensors, the problem of unknown start and end points, multimodal sequences of the same class that consequently differ in length locally and/or globally, and the challenges of online temporal fusion. / We evaluate the performance of the proposed DTW temporal fusion agent on two real-world datasets: 1) accelerometer data acquired from performing two hand gestures, and 2) a benchmark dataset acquired by carrying a mobile device and performing pre-defined user scenarios. Performance results of the DTW-based system are compared with those of a Hidden Markov Model (HMM) based system. The experimental results from both datasets demonstrate that the proposed DTW temporal fusion agent outperforms HMM-based systems, and can perform online temporal fusion efficiently and accurately in real time.
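The "unknown start and end points" problem is commonly handled with open-begin, open-end (subsequence) DTW, sketched below in a generic form that matches a stored template against a longer sensor stream. The signals are illustrative, and this is not the thesis's full recogniser:

```python
import numpy as np

def subsequence_dtw(query, stream):
    """Match `query` anywhere inside `stream` (open begin and end).

    Initialising the first row to zero lets a match start at any stream
    position; taking the minimum over the last row lets it end anywhere.
    Returns (cost, start, end) of the best-matching stream segment.
    """
    n, m = len(query), len(stream)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, :] = 0.0                              # open begin
    start = np.zeros((n + 1, m + 1), dtype=int)
    start[0, :] = np.arange(m + 1)             # remember where each path began
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(query[i - 1] - stream[j - 1])
            prev = min((D[i - 1, j - 1], (i - 1, j - 1)),
                       (D[i - 1, j],     (i - 1, j)),
                       (D[i, j - 1],     (i, j - 1)))
            D[i, j] = cost + prev[0]
            start[i, j] = start[prev[1]]
    end = int(np.argmin(D[n, 1:])) + 1         # open end
    return D[n, end], start[n, end], end

gesture = np.array([0.0, 1.0, 2.0, 1.0, 0.0])             # stored class template
sensor  = np.concatenate([np.full(5, 0.5), [0, 1, 2, 1, 0], np.full(5, 0.5)])
cost, s, e = subsequence_dtw(gesture, sensor)
print(cost, s, e)   # -> 0.0 5 10: the template matches stream samples 5..9
```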
