
Modèles de Markov à variables latentes : matrice de transition non-homogène et reformulation hiérarchique / Markov models with latent variables: non-homogeneous transition matrix and hierarchical reformulation

Lemyre, Gabriel 01 1900 (has links)
This master's thesis is centered on hidden Markov models, a family of models in which an unobserved Markov chain dictates the behaviour of an observable stochastic process through which a noisy version of the latent chain is observed. These bivariate stochastic processes, which can be seen as a natural generalization of mixture models, have shown their ability to capture the varying dynamics of many time series and, more specifically in finance, to reproduce the stylized facts of financial returns. In particular, we are interested in discrete-time Markov chains with finite state spaces, with the objective of studying the contribution of their hierarchical formulations and of the relaxation of the homogeneity hypothesis for the transition matrix to the quality of the fit and predictions, as well as to the capacity to reproduce the stylized facts. We therefore present two hierarchical structures, the first allowing for new interpretations of the relationships between states of the chain, and the second additionally allowing for a more parsimonious parameterization of the transition matrix. We also present three non-homogeneous models, two of which have transition probabilities dependent on observed explanatory variables, while in the third the probabilities depend on another latent variable. We first analyze the goodness of fit and the predictive power of our models on the series of log returns of the S&P 500 and of the Canada-US exchange rate (CADUSD). We also illustrate their capacity to reproduce the stylized facts, and present interpretations of the estimated parameters for the hierarchical and non-homogeneous models. In general, our results seem to confirm the contribution of hierarchical and non-homogeneous models to these measures of performance. In particular, these results seem to suggest that incorporating non-homogeneous dynamics into a hierarchical structure may allow for a more faithful reproduction of the stylized facts (even the slow decay of the autocorrelation functions of squared and absolute returns) and better predictive power, while still allowing for the interpretation of the estimated parameters.
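To make the non-homogeneous transition matrix idea concrete, below is a minimal sketch (not the author's code) of a two-state Gaussian HMM whose transition matrix depends on an observed covariate through a softmax link, together with a forward-algorithm log-likelihood that accepts a different transition matrix at every time step. All parameter values and the choice of covariate (a lagged absolute return) are illustrative assumptions.

```python
import numpy as np

def transition_matrix(z, base_logits, slope):
    """Row-wise softmax of baseline logits shifted by the observed covariate z.

    slope[i, j] is the sensitivity of the i->j transition logit to z."""
    logits = base_logits + slope * z
    logits = logits - logits.max(axis=1, keepdims=True)
    P = np.exp(logits)
    return P / P.sum(axis=1, keepdims=True)

def log_likelihood(x, z, base_logits, slope, mu, sigma, pi0):
    """Forward algorithm allowing a different transition matrix at every step."""
    log_b = -0.5 * np.log(2.0 * np.pi * sigma**2) - 0.5 * ((x[:, None] - mu) / sigma) ** 2
    log_alpha = np.log(pi0) + log_b[0]
    for t in range(1, len(x)):
        P = transition_matrix(z[t], base_logits, slope)
        m = log_alpha.max()
        log_alpha = m + np.log(np.exp(log_alpha - m) @ P) + log_b[t]
    return np.logaddexp.reduce(log_alpha)

# Toy usage on simulated log-returns; the lagged absolute return is the covariate,
# and larger moves raise the probability of switching between the two regimes.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 0.01, size=500)
z = np.abs(np.roll(x, 1))
print(log_likelihood(
    x, z,
    base_logits=np.array([[2.0, 0.0], [0.0, 2.0]]),
    slope=np.array([[0.0, 60.0], [60.0, 0.0]]),
    mu=np.array([0.0, 0.0]),
    sigma=np.array([0.005, 0.02]),
    pi0=np.array([0.5, 0.5]),
))
```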

Rozpoznáváni standardních PILOT-CONTROLLER řídicích povelů v hlasové podobě / Voice recognition of standard PILOT-CONTROLLER control commands

Kufa, Tomáš January 2009 (has links)
The subject of this graduation thesis is the application of speech recognition to ATC commands. The selection of methods and approaches for automatic recognition of ATC commands arises from detailed studies of air traffic. Because there is no single definitive solution in a field as extensive as speech recognition, this diploma work focuses on a speech recognizer based on comparison with templates (dynamic time warping, DTW). This recognizer is implemented in the thesis and compared with the freely available HTK system from Cambridge University, which is based on statistical methods using Hidden Markov models. The suitability of both methods is verified by practical testing and evaluation of the results.
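As a concrete illustration of the template-comparison approach, here is a minimal sketch (not the thesis code, which targets ATC command vocabularies) of a DTW distance between two feature sequences and a nearest-template decision; the feature vectors and command names are made up.

```python
import numpy as np

def dtw_distance(a, b):
    """DTW cost between sequences a (n, d) and b (m, d) with Euclidean local cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m] / (n + m)          # length-normalized accumulated cost

# Toy usage: classify an utterance by its nearest template.
rng = np.random.default_rng(0)
templates = {"climb": rng.normal(size=(40, 13)), "descend": rng.normal(size=(55, 13))}
utterance = templates["climb"] + rng.normal(scale=0.1, size=(40, 13))
print(min(templates, key=lambda w: dtw_distance(utterance, templates[w])))
```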

Koncepty strojového učení pro kategorizaci objektů v obrazu / Machine Learning Concepts for Categorization of Objects in Images

Hubený, Marek January 2017 (has links)
This work focuses on object and scene recognition using machine learning and computer vision tools. Before addressing this problem, the basic phases of the machine learning workflow and the relevant statistical models were studied, with emphasis on their division into discriminative and generative methods. Further, the Bag-of-Words method and its modifications were investigated and described. In the practical part of this work, the Bag-of-Words method with an SVM classifier was implemented in the Matlab environment and the model was tested on various sets of publicly available images.
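The thesis implements the pipeline in Matlab; the sketch below shows the same Bag-of-Words idea in Python with scikit-learn, under the assumption that local descriptors have already been extracted for each image. The vocabulary size, descriptor dimensionality and toy data are arbitrary choices.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def encode(descriptors, vocabulary):
    """Histogram of nearest visual words for one image's local descriptors."""
    words = vocabulary.predict(descriptors)
    hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

# Toy data: each "image" is a set of random 64-d local descriptors.
rng = np.random.default_rng(0)
images = [rng.normal(loc=c, size=(200, 64)) for c in (0.0, 0.5) for _ in range(20)]
labels = [0] * 20 + [1] * 20

vocabulary = KMeans(n_clusters=50, n_init=5, random_state=0).fit(np.vstack(images))
X = np.array([encode(d, vocabulary) for d in images])
clf = LinearSVC().fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```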

Contribution à la modélisation et au pronostic des défaillances d'une machine synchrone à aimants permanents / Contribution to the modeling and failure prognosis of a permanent magnet synchronous machine

Ginzarly, Riham 26 September 2019 (has links)
The core of the work is to build an accurate model of the electrical machine on which the prognostic technique is applied. In this thesis we started with a literature review on hybrid electric vehicles (HEVs), the different types of electrical machines used in HEVs, and the different types of faults that may occur in those machines. We also identified the monitoring parameters that are useful for those different types of faults. Then, a survey is presented enumerating the prognostic techniques that can be applied to this application. The electromagnetic, thermal and vibration finite element model (FEM) of the permanent magnet machine is presented. The model is built for healthy operation and with a fault integrated. The considered types of faults are: demagnetization, turn-to-turn short circuit and eccentricity. A comparison between analytical and FEM (numerical) approaches to electromagnetic machine modeling is illustrated. Fault indicators are identified, i.e., measured parameters useful for fault identification, and useful features are extracted from them; the torque, temperature and vibration signals are elaborated for healthy and faulty states. The strategy of the adopted prognostic approach, the Hidden Markov Model (HMM), is explained. The technical aspect of the method is presented and the prognostic model is formulated. The HMM is applied to detect and localize small-scale faults, for which a systematic strategy is developed. The aging of the machine's components, especially the sensitive ones such as the stator coils and the permanent magnet, is a very important matter for Remaining Useful Life (RUL) calculation. An estimation strategy for the RUL calculation is presented and discussed for these machine components. A closed-loop configuration is very important; it is adopted by all available vehicle systems. Hence, the same steps mentioned previously are also applied to a closed-loop configuration. A global model in which the input of the machine's FEM comes from the modeled inverter is built.
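As a hedged illustration of how a Markov-chain view of degradation yields an RUL estimate (the thesis couples this with HMM state estimation and FEM-based fault indicators; the states and transition probabilities below are invented), the expected time to absorption in the failure state can be computed from the fundamental matrix:

```python
import numpy as np

P = np.array([            # assumed per-cycle transition probabilities (hypothetical)
    [0.995, 0.005, 0.000, 0.000],   # healthy
    [0.000, 0.990, 0.010, 0.000],   # small-scale fault
    [0.000, 0.000, 0.980, 0.020],   # severe fault
    [0.000, 0.000, 0.000, 1.000],   # failure (absorbing)
])
Q = P[:3, :3]                                   # transient-to-transient block
N = np.linalg.inv(np.eye(3) - Q)                # fundamental matrix
rul = N.sum(axis=1)                             # expected cycles to failure per state
for state, r in zip(["healthy", "small fault", "severe fault"], rul):
    print(f"{state}: {r:.0f} cycles")
```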

Economic evaluation of a new genetic risk score to prevent nephropathies in type-2 diabetic patients

Guinan, Kimberly 12 1900 (has links)
The current screening method for diabetic nephropathy (DN) is based upon the detection of urinary albumin and the decline of the estimated glomerular filtration rate, which occur relatively late in the course of the disease. A polygenic risk score (PRS) was developed for early prediction of the risk that type 2 diabetes (T2D) patients will experience DN. The aim of this study was to assess the economic impact of implementing the PRS for the prevention of DN in T2D patients, compared to usual screening methods in Canada. First, a systematic literature review was conducted to examine all published economic evaluations in T2D and DN. The main modelling techniques identified in this review were used to conduct a cost-utility analysis using a Markov model. Health states included pre-end-stage renal disease (pre-ESRD), ESRD and death. Model efficacy parameters were based on prediction of outcome data by polygenic-risk testing of the ADVANCE trial. Analyses were conducted from the Canadian healthcare system and societal perspectives. Over a lifetime horizon, the PRS was a dominant strategy compared to usual screening methods, from both the healthcare system and the societal perspective. In other words, the PRS was less expensive and more effective in terms of quality-adjusted life years than usual screening techniques. Deterministic and probabilistic sensitivity analyses showed that the results remained dominant in the majority of simulations. This economic evaluation demonstrates that adopting the PRS would not only be cost saving but would also help prevent ESRD and improve patients' quality of life.
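A minimal sketch of the kind of three-state Markov cohort model described above, with entirely hypothetical transition probabilities, costs and utilities (not the study's inputs), accumulating discounted costs and QALYs for each screening strategy:

```python
import numpy as np

def run_cohort(P, cost, utility, cycles=40, discount=0.015):
    """Annual-cycle cohort simulation; returns total discounted cost and QALYs."""
    state = np.array([1.0, 0.0, 0.0])            # everyone starts in pre-ESRD
    total_cost = total_qaly = 0.0
    for t in range(cycles):
        d = 1.0 / (1.0 + discount) ** t
        total_cost += d * state @ cost
        total_qaly += d * state @ utility
        state = state @ P
    return total_cost, total_qaly

# States: pre-ESRD, ESRD, death. Hypothetical assumption: earlier detection
# with the PRS lowers the annual pre-ESRD -> ESRD transition probability.
P_usual = np.array([[0.95, 0.03, 0.02], [0.0, 0.85, 0.15], [0.0, 0.0, 1.0]])
P_prs   = np.array([[0.97, 0.01, 0.02], [0.0, 0.85, 0.15], [0.0, 0.0, 1.0]])
cost    = np.array([2_000.0, 60_000.0, 0.0])     # annual cost per state (hypothetical)
utility = np.array([0.85, 0.55, 0.0])            # annual utility per state (hypothetical)
prs_cost = np.array([200.0, 0.0, 0.0])           # hypothetical annual screening-program cost

c_usual, q_usual = run_cohort(P_usual, cost, utility)
c_prs, q_prs = run_cohort(P_prs, cost + prs_cost, utility)
print(f"incremental cost {c_prs - c_usual:,.0f}, incremental QALYs {q_prs - q_usual:.2f}")
```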

Transformation model selection by multiple hypotheses testing

Lehmann, Rüdiger January 2014 (has links)
Transformations between different geodetic reference frames are often performed such that the transformation parameters are first determined from control points. If we do not know in advance which of the numerous transformation models is appropriate, we can set up a multiple hypotheses test. The paper extends the common method of testing transformation parameters for significance to the case where constraints on such parameters are also tested. This provides more flexibility when setting up such a test: one can formulate a general model with a maximum number of transformation parameters and specialize it by adding constraints on those parameters that need to be tested. The proper test statistic in a multiple test is shown to be either the extreme normalized or the extreme studentized Lagrange multiplier. These statistics are shown to perform better than the more intuitive test statistics derived from misclosures. It is shown how model selection by multiple hypotheses testing relates to the use of information criteria such as AICc and Mallows' Cp, which are based on an information-theoretic approach. Nevertheless, whenever the approaches are comparable, the results of an exemplary computation almost coincide.
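To illustrate the model-selection side of this comparison, here is a hedged sketch (not the paper's derivation) of choosing among 2-D transformation models fitted to control points by least squares, using one common form of the AICc criterion; the control points and noise level are simulated assumptions.

```python
import numpy as np

def design(model, src):
    """Design matrix mapping parameters to the stacked (x, y) target coordinates."""
    x, y = src[:, 0], src[:, 1]
    one, zero = np.ones_like(x), np.zeros_like(x)
    if model == "translation":          # 2 parameters: tx, ty
        rows = [np.column_stack([one, zero]), np.column_stack([zero, one])]
    elif model == "similarity":         # 4 parameters: a, b, tx, ty
        rows = [np.column_stack([x, -y, one, zero]), np.column_stack([y, x, zero, one])]
    elif model == "affine":             # 6 parameters
        rows = [np.column_stack([x, y, one, zero, zero, zero]),
                np.column_stack([zero, zero, zero, x, y, one])]
    return np.vstack(rows)

def aicc(model, src, dst):
    A, y = design(model, src), np.concatenate([dst[:, 0], dst[:, 1]])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    rss = np.sum((y - A @ beta) ** 2)
    n, k = len(y), A.shape[1]
    return n * np.log(rss / n) + 2 * k + 2 * k * (k + 1) / (n - k - 1)

rng = np.random.default_rng(1)
src = rng.uniform(0, 100, size=(12, 2))
dst = 1.001 * src @ np.array([[0.999, 0.05], [-0.05, 0.999]]).T + [5.0, -3.0]
dst += rng.normal(scale=0.02, size=dst.shape)
print(min(["translation", "similarity", "affine"], key=lambda m: aicc(m, src, dst)))
```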

GPS-Free UAV Geo-Localization Using a Reference 3D Database

Karlsson, Justus January 2022 (has links)
The goal of this thesis has been global geolocalization using only visual input and a 3D database for reference. In recent years, Convolutional Neural Networks (CNNs) have seen huge success in the task of classifying images. The flattened tensors at the final layers of a CNN can be viewed as vectors describing different input image features. Two networks were trained so that satellite and aerial images taken from different views of the same location had feature vectors that were similar. The networks were also trained so that images taken from different locations had different feature vectors. After training, the position of a given aerial image can then be estimated by finding the satellite image with a feature vector that is the most similar to that of the aerial image.

A previous method called Where-CNN was used as a baseline model. Batch-Hard triplet loss, the Adam optimizer, and a different CNN backbone were tested as possible augmentations to this method. The models were trained on 2640 different locations in Linköping and Norrköping. The models were then tested on a sequence of 4411 query images along a path in Jönköping. The search region had 1449 different locations constituting a total area of 24 km².

In Top-1% accuracy, there was a significant improvement over the baseline, increasing from 61.62% accuracy to 88.62%. The environment was modeled as a Hidden Markov Model to filter the sequence of guesses. The Viterbi algorithm was then used to find the most probable path. This filtering procedure reduced the average error along the path from 2328.0 m to just 264.4 m for the best model. Here the baseline had an average error of 563.0 m after filtering.

A few different 3D methods were also tested. One drawback was that no pretrained weights existed for these models, as opposed to the 2D models, which were pretrained on the ImageNet dataset. The best 3D model achieved a Top-1% accuracy of 70.41%. It should be noted that the best 2D model without using any pretraining achieved a lower Top-1% accuracy of 49.38%. In addition, a 3D method for efficiently doing convolution on sparse 3D data was presented. Compared to the straightforward method, it was almost 2.5 times faster while still having comparable accuracy at individual query prediction.

While there was a significant improvement over the baseline, it was not significant enough to provide reliable and accurate localization for individual images. For global navigation, using the entire Earth as search space, the information in a 2D image might not be enough for a location to be uniquely identifiable. However, the 3D CNN techniques tested did not improve the results of the pretrained 2D models. The use of more data and experimentation with different 3D CNN architectures is a direction in which further research would be exciting.
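A minimal sketch of the HMM filtering step described above (not the thesis code): hidden states are map cells, per-image similarity scores are used directly as emission log-scores, transitions only allow staying in place or moving to an adjacent cell, and the Viterbi algorithm recovers the most probable path. The 1-D toy map and all scores are made up.

```python
import numpy as np

def viterbi(log_emission, log_transition, log_prior):
    """log_emission: (T, K); returns the most probable hidden-state sequence."""
    T, K = log_emission.shape
    delta = log_prior + log_emission[0]
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_transition          # (K, K): from i to j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_emission[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy 1-D "map" of 10 cells; similarity scores are noisy around the true path.
rng = np.random.default_rng(0)
true_path = np.clip(np.cumsum(rng.integers(-1, 2, size=30)), 0, 9)
sim = rng.normal(size=(30, 10))
sim[np.arange(30), true_path] += 2.0                      # similarities as log-scores
dist = np.abs(np.arange(10)[:, None] - np.arange(10)[None, :])
log_T = np.where(dist <= 1, np.log(1 / 3), -np.inf)       # move at most one cell per step
print(viterbi(sim, log_T, np.full(10, -np.log(10))))
```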

PROGRAM ANOMALY DETECTION FOR INTERNET OF THINGS

Akash Agarwal (13114362) 01 September 2022 (has links)
Program anomaly detection (modeling normal program executions to detect deviations at runtime as cues for possible exploits) has become a popular approach to software security. To leverage high-performance modeling and complete tracing, existing techniques however focus on subsets of applications, e.g., on system calls or calls to predefined libraries. Due to this limited scope, they are insufficient to detect subtle control-oriented and data-oriented attacks that introduce new illegal call relationships at the application level. Such techniques are also hard to apply on devices that lack a clear separation between the OS and the application layer. This dissertation advances the design and implementation of program anomaly detection techniques by providing application context for library and system calls, making it powerful for detecting advanced attacks aimed at manipulating intra- and inter-procedural control flow and decision variables.

This dissertation has two main parts. The first part describes LANCET, a statically initialized, generic calling-context program anomaly detection technique based on Hidden Markov Modeling, which provides security against control-oriented attacks at program runtime. It also establishes an efficient execution tracing mechanism facilitated through source-code instrumentation of applications. The second part describes EDISON, a program anomaly detection framework that provides security against data-oriented attacks, using graph representation learning for intra-procedural behavioral modeling and language models for inter-procedural behavioral modeling.

This dissertation makes three high-level contributions. First, it demonstrates the design, implementation and extensive evaluation of an aggregation-based anomaly detection technique using fine-grained, generic calling-context-sensitive modeling that allows the detection to scale over entire applications. Second, it shows the design, implementation and extensive evaluation of a detection technique that maps runtime traces to the program's control-flow graph and leverages graphical feature representations to learn dynamic program behavior. Finally, this dissertation provides details and experience for designing program anomaly detection frameworks, from high-level concepts and design down to low-level implementation techniques.
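As a much-simplified, hypothetical stand-in for the call-level behavioral modeling described above (a first-order transition model rather than the dissertation's HMM and graph-learning techniques), the sketch below learns call-to-call transition probabilities from normal traces and flags a runtime trace whose average per-call log-likelihood falls below a threshold calibrated on the normal data.

```python
from collections import Counter, defaultdict
import math

def train(traces, alpha=0.1):
    """Learn smoothed call-to-call transition log-probabilities from normal traces."""
    calls = sorted({c for t in traces for c in t})
    counts = defaultdict(Counter)
    for t in traces:
        for prev, cur in zip(t, t[1:]):
            counts[prev][cur] += 1
    def logprob(prev, cur):
        total = sum(counts[prev].values()) + alpha * len(calls)
        return math.log((counts[prev][cur] + alpha) / total)
    return logprob

def score(trace, logprob):
    """Average per-transition log-likelihood of a call trace."""
    return sum(logprob(p, c) for p, c in zip(trace, trace[1:])) / max(len(trace) - 1, 1)

# Toy traces with made-up call names.
normal = [["open", "read", "read", "close"], ["open", "read", "write", "close"]] * 50
logprob = train(normal)
threshold = min(score(t, logprob) for t in normal)     # calibrate on normal traces
attack = ["open", "close", "write", "read"]            # introduces unseen call relationships
print(score(attack, logprob) < threshold)              # True -> flagged as anomalous
```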

Unsupervised Detection of Interictal Epileptiform Discharges in Routine Scalp EEG : Machine Learning Assisted Epilepsy Diagnosis

Shao, Shuai January 2023 (has links)
Epilepsy affects more than 50 million people; it is one of the most prevalent neurological disorders and has a high impact on the quality of life of those suffering from it. However, 70% of epilepsy patients can live seizure-free with proper diagnosis and treatment. Patients are evaluated using scalp EEG recordings, which are cheap and non-invasive. Diagnostic yield is however low, and qualified personnel need to process large amounts of data in order to accurately assess patients. MindReader is an unsupervised classifier which detects spectral anomalies and generates a hypothesis of the underlying patient state over time. The aim is to highlight abnormal, potentially epileptiform states, which could expedite the analysis of patients and let qualified personnel attest the results. It was used to evaluate 95 scalp EEG recordings from healthy adults and adult patients with epilepsy. Interictal epileptiform discharges (IEDs) occurring in the samples had been retroactively annotated, along with the patient state and maneuvers performed by personnel, to enable characterization of the classifier's detection performance. The performance was slightly worse than previous benchmarks on pediatric scalp EEG recordings, with a 7% and 33% drop in specificity and sensitivity, respectively. Electrode positioning and the partial spatial extent of events had a notable impact on performance; however, no correlation between annotated disturbances and reduced performance could be found. Additional explorative analysis was performed on serialized intermediate data to evaluate the analysis design. Hyperparameters and electrode montage options were exposed to optimize the average Matthews correlation coefficient (MCC) per electrode per patient on a subset of the patients with epilepsy. An increased window length and a lowered amount of training, along with a common average montage, proved most successful. The Euclidean distance of cumulative spectra (ECS), a metric suited to spectral analysis, and corresponding L2 and L1 loss functions were implemented, of which the ECS further improved the average performance across all samples. Four additional analyses, featuring new time-frequency transforms and multichannel convolutional autoencoders, were evaluated, and an analysis using the continuous wavelet transform (CWT) and a convolutional autoencoder (CNN) performed best, with an average MCC score of 0.19 and 56.9% sensitivity with approximately 13.9 false positives per minute.
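For reference, a small sketch of the Matthews correlation coefficient used above as the per-electrode optimization target; the annotated and predicted window labels below are made up.

```python
import numpy as np

def mcc(y_true, y_pred):
    """Matthews correlation coefficient for binary labels (1 = IED window)."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return (tp * tn - fp * fn) / denom if denom else 0.0

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 0, 1, 0])   # annotated windows
y_pred = np.array([0, 1, 1, 1, 0, 0, 0, 0, 1, 0])   # classifier output per window
print(round(mcc(y_true, y_pred), 3))
```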

A Probabilistic Formulation of Keyword Spotting

Puigcerver I Pérez, Joan 18 February 2019 (has links)
Keyword Spotting, applied to handwritten text documents, aims to retrieve the documents, or parts of them, that are relevant for a query, given by the user, within a large collection of documents. The topic has gained large interest in the last 20 years among Pattern Recognition researchers, as well as digital libraries and archives. This thesis first defines the goal of Keyword Spotting from a Decision Theory perspective. Then, the problem is tackled following a probabilistic formulation. More precisely, Keyword Spotting is presented as a particular instance of Information Retrieval, where the content of the documents is unknown, but can be modeled by a probability distribution. In addition, the thesis also proves that, under the correct probability distributions, the framework provides the optimal solution under many of the evaluation measures traditionally used in the field. Later, different statistical models are used to represent the probability distribution over the content of the documents. These models, Hidden Markov Models or Recurrent Neural Networks, are estimated from training data, and the corresponding distributions over the transcripts of the images can be efficiently represented using Weighted Finite State Transducers. In order to make the framework practical for large collections of documents, this thesis presents several algorithms to build probabilistic word indexes, using both lexicon-based and lexicon-free models. These indexes are very similar to the ones used by traditional search engines. Furthermore, we study the relationship between the presented formulation and other seminal approaches in the field of Keyword Spotting, highlighting some limitations of the latter. Finally, all the contributions are evaluated experimentally, not only on standard academic benchmarks, but also on collections including tens of thousands of pages of historical manuscripts. The results show that the proposed framework and algorithms make it possible to build very accurate and very fast Keyword Spotting systems, with a solid underlying theory. / Puigcerver I Pérez, J. (2018). A Probabilistic Formulation of Keyword Spotting [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/116834
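A tiny sketch of the probabilistic formulation described above: the relevance score of a query for a line image is the total posterior probability of the transcripts that contain it, here approximated from a hypothetical n-best list of transcripts with posterior probabilities.

```python
def relevance(query, nbest):
    """nbest: list of (transcript, posterior probability) pairs for one line image."""
    return sum(p for text, p in nbest if query in text.split())

nbest = [("the hidden markov model", 0.55),
         ("the hidden marrow model", 0.25),
         ("a hidden markov model", 0.15),
         ("the golden marker mould", 0.05)]
print(round(relevance("markov", nbest), 2))   # 0.55 + 0.15 = 0.70
```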
