31

Understanding Traffic Cruising Causation: Via Parking Data Enhancement

Jasarevic, Mirza January 2021 (has links)
Background. Some computer scientists have recently pointed out that it may be more effective for the computer science community to focus on data preparation for performance improvements, rather than exclusively comparing modeling techniques. To test how useful this shift in focus is, this paper chooses a particular data extraction technique and examines the resulting differences in data model performance.

Objectives. Five recent (2016-2020) studies on modeling parking congestion used a rationalized rather than a measured approach to feature extraction, focusing mainly on selecting the modeling technique with the best performance. This study instead picks a feature common to them all and attempts to improve it, then compares its performance against the feature in the state it had in the related studies. Rather than trying several modeling techniques, weights are applied to the selected features and varied, specifically for time-series parking data, where the opportunity arose. Apart from this, the reusability of the data is also gauged.

Methods. An experimental case study is designed in three parts. The first tests the importance of weighted-sum configurations relative to drivers' expectations. The second analyzes how much of the real data can be recycled, and whether spatial or temporal comparisons are better suited to synthesizing parking data. The third compares the performance of the best configuration against the default configuration using the k-means clustering algorithm with dynamic time warping distance.

Results. The experimental results show performance improvements at all levels, with improvement increasing as the sample sizes grow: up to 9% average improvement per category and 6.2% for the entire city. The popularity of a parking lot turned out to be as important as its occupancy rate (50% importance each), while volatility was obstructive. A few months of data were recyclable, and a few small parking lots could replace each other's datasets. Temporal aspects turned out to be better than spatial aspects for parking data simulations.

Conclusions. The results support the data scientists' belief that improvements to the quality and quantity of data are more important than creating new types of models. The score can be used as a better metric of parking congestion rates, for both drivers and managers, and can be employed in the public sphere provided that higher-quality, richer data are available.
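To illustrate the kind of pipeline this abstract describes, here is a minimal sketch of a weighted-sum score over two parking features followed by k-means clustering under a DTW distance. The feature names, the 50/50 weighting, the synthetic data, and the use of the tslearn library are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np
from tslearn.clustering import TimeSeriesKMeans

# Hypothetical per-lot time series: rows are parking lots, columns are
# hourly observations; both features normalized to [0, 1].
rng = np.random.default_rng(0)
occupancy = rng.random((20, 168))    # 20 lots, one week of hourly data
popularity = rng.random((20, 168))

# Weighted-sum score; the thesis found popularity and occupancy equally
# important (50% each), hence w_occ = w_pop = 0.5 here.
w_occ, w_pop = 0.5, 0.5
score = w_occ * occupancy + w_pop * popularity

# k-means clustering of the score series under DTW distance.
model = TimeSeriesKMeans(n_clusters=3, metric="dtw", random_state=0)
labels = model.fit_predict(score[:, :, None])   # (n_lots, length, 1) format
print(labels)
```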
32

Klasifikace srdečních cyklů / Classification of cardiac cycles

Lorenc, Patrik January 2013 (has links)
This work deals with the classification of cardiac cycles using dynamic time warping and cluster analysis. Dynamic time warping is one of the older methods, but thanks to its simplicity compared with others it is still widely used and achieves good results in practice. Cluster analysis is used in many fields, such as marketing, as well as for biological signals. The aim of this work is a general introduction to the ECG signal and to the dynamic time warping method, the implementation of the dynamic time warping algorithm, followed by cluster analysis and, finally, the creation of a user interface for the algorithms.
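As a concrete reference for the algorithm this abstract mentions, below is a minimal textbook implementation of the dynamic time warping distance between two 1-D sequences. It is an illustrative sketch, not the implementation from the thesis.

```python
import numpy as np

def dtw_distance(x, y):
    """Classical dynamic time warping distance between 1-D sequences.

    Fills the cumulative-cost matrix with the standard recurrence:
    each cell adds the local cost to the cheapest of the three
    predecessor cells (match, insertion, deletion).
    """
    n, m = len(x), len(y)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(x[i - 1] - y[j - 1])          # local distance
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

# Two heartbeat-like waves sampled at different rates still align closely.
beat_a = np.sin(np.linspace(0, 2 * np.pi, 50))
beat_b = np.sin(np.linspace(0, 2 * np.pi, 70))
print(dtw_distance(beat_a, beat_b))
```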
33

Detekce klíčových slov v mluvené řeči / Keyword spotting

Zemánek, Tomáš January 2011 (has links)
This thesis is aimed at the design of a keyword detector. The work contains a description of the methods used for this purpose and the design of a keyword detection algorithm. The proposed detector is based on DTW (Dynamic Time Warping). Analysis of the problem was performed with a module programmed in ANSI C, created as part of the thesis. The results of the detector were evaluated using the WER (word error rate) and AUC (area under curve) metrics.
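For reference, the WER metric mentioned above is conventionally computed as the word-level edit distance between the recognized and reference transcripts, divided by the reference length. A minimal sketch of that computation follows (in Python for brevity, not the thesis's ANSI C module):

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + insertions + deletions) / reference length,
    computed with the standard Levenshtein dynamic program over words."""
    ref, hyp = reference.split(), hypothesis.split()
    n, m = len(ref), len(hyp)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i                       # deletions only
    for j in range(m + 1):
        d[0][j] = j                       # insertions only
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[n][m] / n

print(word_error_rate("call me at noon", "call me soon"))  # 0.5
```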
34

Description de contenu vidéo : mouvements et élasticité temporelle / Description of video content: motion and temporal elasticity

Blanc, Katy 17 December 2018 (has links)
Video recognition has gained in performance in recent years, largely thanks to improvements in deep neural networks on images. However, the jump in recognition rates on images has not translated directly into recognition rates on videos. This limitation is due to the added dimension, time, from which a robust description is still hard to extract. Recurrent neural networks introduce temporality, but their memory is limited in time. State-of-the-art video description methods usually handle time as an extra spatial dimension, and the combination of several video description methods achieves the current best accuracies. Yet the temporal dimension has an elasticity of its own, different from the spatial dimensions. Indeed, the temporal dimension of a video can be deformed locally: a partial dilation produces a visual slow-down in the video without changing its understanding, in contrast to a spatial dilation of an image, which would modify the proportions of the shown objects. We can therefore hope to further improve video content classification by designing a description that is invariant to these speed changes. This thesis addresses the problem of robust video description by considering the elasticity of the temporal dimension from three different angles.
First, we described the motion content locally and explicitly. Singularities are detected in the optical flow, then tracked over time and aggregated into chains to describe portions of video. We used this description on sports content. Next, we extracted global, implicit descriptions using tensor decompositions, which allow a video to be treated as a multi-dimensional data table; the extracted descriptions are evaluated in a classification task. Finally, we studied methods for normalizing the temporal dimension using dynamic time warping of sequences, and showed that this normalization leads to better classification.
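As an illustration of the temporal normalization idea in the last step, the sketch below aligns a query series to a reference with DTW and then averages the query values mapped to each reference index, producing a speed-normalized version of the query. This is a generic sketch under those assumptions, not the thesis's exact procedure.

```python
import numpy as np

def dtw_path(x, y):
    """Cumulative DTW cost matrix plus backtracked optimal alignment path."""
    n, m = len(x), len(y)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost[i, j] = abs(x[i - 1] - y[j - 1]) + min(
                cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    # Backtrack from (n, m) following the cheapest predecessor cell.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

def normalize_to_reference(query, reference):
    """Warp `query` onto the time axis of `reference` via the DTW path."""
    warped = np.zeros(len(reference))
    counts = np.zeros(len(reference))
    for qi, ri in dtw_path(query, reference):
        warped[ri] += query[qi]
        counts[ri] += 1
    return warped / np.maximum(counts, 1)

reference = np.sin(np.linspace(0, 2 * np.pi, 60))
query = np.sin(np.linspace(0, 2 * np.pi, 90))    # same motion, played slower
print(normalize_to_reference(query, reference)[:5])
```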
35

Agriculture monitoring using satellite data

Erik, Graff January 2021 (has links)
As technology advances, the possibility of using satellite data and observations to aid agricultural activities comes closer to reality. Swedish farmers can apply for subsidies for land on which crop management and animal grazing occur, and every year thousands of manual follow-up checks are conducted by Svenska Jordbruksverket (the Swedish Board of Agriculture) to validate the farmers' claims to financial aid. RISE (Research Institutes of Sweden) is currently researching a replacement for the manual follow-up checks: an automated process using optical satellite observations, primarily from the ESA satellite constellation Sentinel-2 and secondarily from the radar observations of the Sentinel-1 constellation. The optical observations from Sentinel-2 are greatly hindered by weather in the Earth's atmosphere and by lack of sunlight, whereas the radar-based observations of Sentinel-1 penetrate any weather conditions and are entirely independent of sunlight. Using the optical index NDVI (Normalized Difference Vegetation Index), which is strongly correlated with plant chlorophyll, and the radar index RVI (Radar Vegetation Index), the study seeks to classify animal grazing activities. Dynamic time warping and hierarchical clustering are used to analyze and attempt classification on the two selected datasets of 959 and 20 fields. Five experiments were conducted to analyze the observational data, mainly from Sentinel-2 but also from Sentinel-1. The results were inconclusive, and successful classification could not be achieved on the larger 959-field dataset. One experiment on the smaller 20-field dataset indicates that classification is indeed possible using mean-valued NDVI time series, but the small size of that dataset makes it difficult to draw conclusions. Validating any candidate classification method will require a larger dataset.
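For concreteness, NDVI is computed from the red and near-infrared bands as (NIR - Red) / (NIR + Red). The sketch below computes mean NDVI series for a set of fields and clusters them hierarchically on pairwise DTW distances. The synthetic field data and the use of scipy are illustrative assumptions, not the thesis's code.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red + 1e-9)

def dtw(x, y):
    """Classical DTW distance (see the earlier sketch for details)."""
    n, m = len(x), len(y)
    c = np.full((n + 1, m + 1), np.inf)
    c[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c[i, j] = abs(x[i-1] - y[j-1]) + min(c[i-1, j], c[i, j-1], c[i-1, j-1])
    return c[n, m]

# Hypothetical per-field mean band values over one growing season.
rng = np.random.default_rng(1)
nir = rng.uniform(0.3, 0.8, size=(20, 30))   # 20 fields, 30 observations
red = rng.uniform(0.05, 0.3, size=(20, 30))
series = ndvi(nir, red)

# Pairwise DTW distances, condensed for scipy's hierarchical clustering.
n_fields = len(series)
dist = np.zeros((n_fields, n_fields))
for i in range(n_fields):
    for j in range(i + 1, n_fields):
        dist[i, j] = dist[j, i] = dtw(series[i], series[j])

labels = fcluster(linkage(squareform(dist), method="average"),
                  t=2, criterion="maxclust")
print(labels)   # candidate grouping, e.g. grazed vs. not grazed
```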
36

Novel Misfit Functions for Full-waveform Inversion

Chen, Fuqiang 04 1900 (has links)
The main objective of this thesis is to develop novel misfit functions for full-waveform inversion such that (a) the estimation of the long-wavelength model is less likely to stagnate in spurious local minima and (b) the inversion is immune to wavelet inaccuracy. First, I investigate the pros and cons of misfit functions based on optimal transport theory as indicators of the traveltime discrepancy in seismic data. Even though the mathematically well-defined optimal transport theory is robust at highlighting the traveltime difference between two probability distributions, its application to seismic data is restricted, mainly because seismic data are not probability distribution functions. We then develop a misfit function combining local cross-correlation and dynamic time warping. This combination enables the proposed misfit to automatically identify arrivals associated with a phase shift. Numerical and field-data examples demonstrate its robustness for early arrivals and its limitations for later arrivals. Next, we introduce a differentiable dynamic time warping distance as a misfit function that highlights the traveltime discrepancy without non-trivial human intervention. Compared to the conventional warping distance, the differentiable version retains the property of representing the traveltime difference; moreover, it eliminates abrupt changes in the adjoint source, which helps full-waveform inversion converge to geologically relevant estimates. Finally, we develop a misfit function termed the deconvolutional double-difference measurement, which measures the first difference by deconvolution rather than cross-correlation. We also present the derivation of the adjoint source for the new misfit function. Numerical examples and a mathematical proof demonstrate that this modification makes full-waveform inversion with the deconvolutional double-difference measurement immune to wavelet inaccuracy.
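The differentiable warping distance mentioned above is commonly realized as soft-DTW, in which the hard minimum of the DTW recurrence is replaced by a smooth soft-minimum controlled by a temperature gamma. The sketch below shows the generic soft-DTW recurrence on two seismic-like traces; it is not necessarily the thesis's exact formulation.

```python
import numpy as np
from scipy.special import logsumexp

def soft_dtw(x, y, gamma=1.0):
    """Soft-DTW value between 1-D series x and y.

    Classical DTW takes min(a, b, c) over predecessor costs; soft-DTW
    uses softmin_gamma(a, b, c) = -gamma * log(sum(exp(-./gamma))),
    which is smooth, so the distance is differentiable in its inputs
    and yields a smooth adjoint source.
    """
    n, m = len(x), len(y)
    D = (x[:, None] - y[None, :]) ** 2           # pairwise squared costs
    R = np.full((n + 1, m + 1), np.inf)
    R[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            prev = np.array([R[i-1, j-1], R[i-1, j], R[i, j-1]])
            R[i, j] = D[i-1, j-1] - gamma * logsumexp(-prev / gamma)
    return R[n, m]

# Two traces whose arrivals differ by a small time shift.
t = np.linspace(0, 1, 200)
trace_a = np.exp(-((t - 0.40) / 0.02) ** 2)
trace_b = np.exp(-((t - 0.45) / 0.02) ** 2)
print(soft_dtw(trace_a, trace_b, gamma=0.1))
```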
37

Automatisk metod för läs-screening i lågstadiet / Automated screening method for reading difficulties in lower school

Lindmark, Ada, Vos, Christian January 2020 (has links)
One of the most central parts of lower-school education is the Swedish subject, and especially pupils' reading ability. Even though there are mandatory screening moments and national assessment support, school staff perceive the mapping of reading skills as a time-consuming, complex, and subjective process due to its manual format. This study investigates how automated reading screening can be implemented as a complementary tool in lower school using forced alignment. The purpose is to determine whether a program is reliable enough to ease the screening process and to identify pupils with reading difficulties earlier. The results were analyzed through interviews with school staff and the development of a prototype. The study concluded that there are advantages to complementing manual reading screening with an automatic tool, but that no conclusions can be drawn about whether the tool can be implemented in all schools with positive effects. It may be more beneficial to use an automatic tool in middle school, but due to the low response rate in the interviews, this could not be established.
38

Function Registration from a Bayesian Perspective

Lu, Yi January 2017 (has links)
No description available.
39

Development of Real-Time Predictive Analytics Tools for Small Water Distribution System

Woo, Hyoungmin January 2017 (has links)
No description available.
40

Design of Keyword Spotting System Based on Segmental Time Warping of Quantized Features

Karmacharya, Piush January 2012 (has links)
Keyword spotting in general means identifying a keyword in a spoken or written document. In this research a novel approach to designing a simple spoken keyword spotting/recognition system based on template matching is proposed, different from the Hidden Markov Model based systems that are most widely used today. The system can be used equally efficiently on any language, as it does not rely on an underlying language model or grammatical constraints. The proposed method for keyword spotting is based on a modified version of classical dynamic time warping, which has long been a primary method for measuring the similarity between two sequences varying in time. For processing, a speech signal is divided into small stationary frames, each represented by a quantized feature vector. Both the keyword and the speech utterance are represented in terms of 1-dimensional codebook indices. The utterance is divided into segments, and the warped distance is computed for each segment and compared against the test keyword. A distortion score for each segment is computed as a likelihood measure of the keyword. The proposed algorithm is designed to take advantage of multiple instances of the test keyword (if available) by merging the scores of all keywords used. The training method for the proposed system is completely unsupervised, i.e., it requires neither a language model nor a phoneme model for keyword spotting. Prior unsupervised training algorithms were based on computing Gaussian posteriorgrams, making the training process complex, whereas the proposed algorithm requires minimal training data, and the system can also be trained to perform in a different environment (language, noise level, recording medium, etc.) by re-training the original cluster on additional data. Techniques for designing a model keyword from multiple instances of the test keyword are discussed. System performance under variation of different parameters, such as the number of clusters and the number of keyword instances available, was studied in order to optimize the speed and accuracy of the system. The system was evaluated on fourteen different keywords from the CallHome and Switchboard speech corpora. Results varied across keywords, and a maximum accuracy of 90% was obtained, comparable to other methods using the same time warping algorithms on Gaussian posteriorgrams. Results are compared across parameter variations, with suggestions for possible improvements. / Electrical and Computer Engineering
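A minimal sketch of the segmental matching idea described above: slide a window over the utterance's codebook-index sequence, score each segment against the keyword template with DTW over a symbol-level cost, and flag segments whose distortion falls below a threshold. The 0/1 symbol cost, window step, and threshold are illustrative assumptions, not the thesis's tuned values.

```python
import numpy as np

def dtw_symbols(a, b):
    """DTW over two sequences of codebook indices with 0/1 symbol cost."""
    n, m = len(a), len(b)
    c = np.full((n + 1, m + 1), np.inf)
    c[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            local = 0.0 if a[i-1] == b[j-1] else 1.0
            c[i, j] = local + min(c[i-1, j], c[i, j-1], c[i-1, j-1])
    return c[n, m] / (n + m)      # length-normalized distortion score

def spot_keyword(utterance, keyword, threshold=0.25, step=5):
    """Return (start, score) for segments scoring below the threshold."""
    hits = []
    win = len(keyword)            # segment length tied to template length
    for start in range(0, len(utterance) - win + 1, step):
        score = dtw_symbols(utterance[start:start + win], keyword)
        if score < threshold:
            hits.append((start, score))
    return hits

# Hypothetical codebook-index sequences (e.g., from vector quantization).
rng = np.random.default_rng(2)
keyword = rng.integers(0, 64, size=30)
utterance = np.concatenate(
    [rng.integers(0, 64, 100), keyword, rng.integers(0, 64, 100)])
print(spot_keyword(utterance, keyword))   # detects the embedded keyword
```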
