  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

Approches jointes texte/image pour la compréhension multimodale de documents / Text/image joint approaches for multimodal understanding of documents

Delecraz, Sébastien 10 December 2018 (has links)
The human faculties of understanding are essentially multimodal: to understand the world around them, human beings fuse the information coming from all of their sensory receptors. Most of the documents used in automatic information processing are likewise multimodal, for example text and images in textual documents, or images and sound in video documents, yet the processing applied to them is most often monomodal. The aim of this thesis is to propose joint text/image processes for multimodal documents through two studies: one on multimodal fusion for speaker role recognition in television broadcasts, the other on the complementarity of modalities for a linguistic analysis task on corpora of captioned images. In the first study, we analyse audiovisual documents from television news channels and propose an approach that uses deep neural networks to build a joint multimodal representation for the representation and fusion of the modalities. In the second part of the thesis, we investigate approaches that use several sources of multimodal information for a monomodal natural language processing task, in order to study their complementarity. We propose a complete system for correcting prepositional attachments using visual information, trained on a multimodal corpus of captioned images.
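The prepositional-attachment correction described in this abstract can be illustrated with a deliberately simplified sketch. The `attach` function, the candidate list and the detector output below are our own hypothetical stand-ins, not the thesis system: the idea is only that visual evidence about which entity holds an object can disambiguate where a prepositional phrase attaches.

```python
# Hypothetical toy, not the thesis implementation: rerank prepositional-
# attachment candidates for "the man sees the girl with the telescope"
# using (assumed) object-detection output linking entities in the image.
detected = {"man": {"telescope"}, "girl": set()}  # who visually holds what

def attach(prep_object, candidates, detected):
    """Prefer the candidate head visually linked to the preposition's object;
    otherwise fall back to the first (e.g. the parser's default) candidate."""
    for head in candidates:
        if prep_object in detected.get(head, set()):
            return head
    return candidates[0]

print(attach("telescope", ["girl", "man"], detected))  # -> man
```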
62

Structural priors in deep neural networks

Ioannou, Yani Andrew January 2018 (has links)
Deep learning has in recent years come to dominate the previously separate fields of research in machine learning, computer vision, natural language understanding and speech recognition. Despite breakthroughs in training deep networks, there remains a lack of understanding of both the optimization and the structure of deep networks. The approach advocated by many researchers in the field has been to train monolithic networks with excess complexity and strong regularization, an approach that leaves much to be desired in efficiency. Instead, we propose that carefully designing networks in consideration of our prior knowledge of the task and the learned representation can improve the memory and compute efficiency of state-of-the-art networks, and even improve generalization: what we propose to denote as structural priors. We present two such novel structural priors for convolutional neural networks, and evaluate them in state-of-the-art image classification CNN architectures. The first method exploits our knowledge of the low-rank nature of most filters learned for natural images by structuring a deep network to learn a collection of mostly small, low-rank filters. The second addresses the channel extents of convolutional filters by learning filters with limited channel extents. The size of these channel-wise basis filters increases with the depth of the model, giving a novel sparse connection structure that resembles a tree root. Both methods are found to improve the generalization of these architectures while also decreasing the size and increasing the efficiency of their training and test-time computation. Finally, we present work towards conditional computation in deep neural networks, moving towards a method of automatically learning structural priors in deep networks.
We propose a new discriminative learning model, conditional networks, that jointly exploit the accurate representation learning capabilities of deep neural networks with the efficient conditional computation of decision trees. Conditional networks yield smaller models, and offer test-time flexibility in the trade-off of computation vs. accuracy.
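The first structural prior, low-rank filters, can be sketched numerically. This is a generic illustration under our own assumptions, not the thesis code: a k x k filter that is (near) rank-1 factorizes into a k x 1 and a 1 x k filter, cutting its parameter count from k*k to 2*k.

```python
import numpy as np

def low_rank_approx(filt, rank=1):
    """Best rank-r approximation of a 2-D filter via truncated SVD."""
    u, s, vt = np.linalg.svd(filt, full_matrices=False)
    return (u[:, :rank] * s[:rank]) @ vt[:rank, :]

# A separable (rank-1) edge filter: Sobel-x = [1, 2, 1]^T outer [1, 0, -1]
sobel_x = np.outer([1.0, 2.0, 1.0], [1.0, 0.0, -1.0])
approx = low_rank_approx(sobel_x, rank=1)

print(np.allclose(approx, sobel_x))        # True: a rank-1 filter is recovered exactly
print(sobel_x.size, 2 * sobel_x.shape[0])  # 9 parameters vs. 6 for the separable pair
```

Many filters learned on natural images are close to this separable form, which is what makes the low-rank structural prior cheap without hurting accuracy.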
63

Suprasegmental representations for the modeling of fundamental frequency in statistical parametric speech synthesis

Fonseca De Sam Bento Ribeiro, Manuel January 2018 (has links)
Statistical parametric speech synthesis (SPSS) has seen improvements over recent years, especially in terms of intelligibility. Synthetic speech is often clear and understandable, but it can also be bland and monotonous. Proper generation of natural speech prosody is still a largely unsolved problem. This is especially relevant in the context of expressive audiobook speech synthesis, where speech is expected to be fluid and captivating. In general, prosody can be seen as a layer that is superimposed on the segmental (phone) sequence. Listeners can perceive the same melody or rhythm in different utterances, and the same segmental sequence can be uttered with a different prosodic layer to convey a different message. For this reason, prosody is commonly accepted to be inherently suprasegmental. It is governed by longer units within the utterance (e.g. syllables, words, phrases) and beyond the utterance (e.g. discourse). However, common techniques for the modeling of speech prosody - and speech in general - operate mainly on very short intervals, either at the state or frame level, in both hidden Markov model (HMM) and deep neural network (DNN) based speech synthesis. This thesis presents contributions supporting the claim that stronger representations of suprasegmental variation are essential for the natural generation of fundamental frequency for statistical parametric speech synthesis. We conceptualize the problem by dividing it into three sub-problems: (1) representations of acoustic signals, (2) representations of linguistic contexts, and (3) the mapping of one representation to another. The contributions of this thesis provide novel methods and insights relating to these three sub-problems. In terms of sub-problem 1, we propose a multi-level representation of f0 using the continuous wavelet transform and the discrete cosine transform, as well as a wavelet-based decomposition strategy that is linguistically and perceptually motivated.
In terms of sub-problem 2, we investigate additional linguistic features such as text-derived word embeddings and syllable bag-of-phones and we propose a novel method for learning word vector representations based on acoustic counts. Finally, considering sub-problem 3, insights are given regarding hierarchical models such as parallel and cascaded deep neural networks.
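As a minimal illustration of sub-problem 1 (our own numpy sketch, not the thesis system), an f0 contour over a linguistic unit can be represented compactly by its first few DCT coefficients; the thesis combines such transforms with wavelet decompositions at multiple linguistic levels.

```python
import numpy as np

def dct2(x):
    """Orthonormal DCT-II of a 1-D signal."""
    n = len(x)
    k = np.arange(n)
    basis = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    basis[0] *= 1 / np.sqrt(2)
    return basis @ x * np.sqrt(2 / n)

def idct2(c):
    """Inverse of dct2 (the transform is orthonormal)."""
    n = len(c)
    k = np.arange(n)
    basis = np.cos(np.pi * (2 * k[:, None] + 1) * k[None, :] / (2 * n))
    basis[:, 0] *= 1 / np.sqrt(2)
    return basis @ c * np.sqrt(2 / n)

f0 = 120 + 30 * np.sin(np.linspace(0, np.pi, 20))  # toy rising-falling contour (Hz)
coeffs = dct2(f0)
print(np.allclose(idct2(coeffs), f0))              # True: exact round-trip
print(np.sum(coeffs[:4] ** 2) / np.sum(coeffs ** 2))  # low-order terms carry most energy
```

A handful of coefficients per syllable or phrase gives a fixed-length, smooth parameterization of the contour, which is the kind of suprasegmental representation the thesis argues for.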
64

Learning to Map the Visual and Auditory World

Salem, Tawfiq 01 January 2019 (has links)
The appearance of the world varies dramatically not only from place to place but also from hour to hour and month to month. Billions of images that capture this complex relationship are uploaded to social-media websites every day and often are associated with precise time and location metadata. This rich source of data can be beneficial to improve our understanding of the globe. In this work, we propose a general framework that uses these publicly available images for constructing dense maps of different ground-level attributes from overhead imagery. In particular, we use well-defined probabilistic models and a weakly-supervised, multi-task training strategy to provide an estimate of the expected visual and auditory ground-level attributes consisting of the type of scenes, objects, and sounds a person can experience at a location. Through a large-scale evaluation on real data, we show that our learned models can be used for applications including mapping, image localization, image retrieval, and metadata verification.
65

[en] ENHANCEMENT AND CONTINUOUS SPEECH RECOGNITION IN ADVERSE ENVIRONMENTS / [pt] REALCE E RECONHECIMENTO DE VOZ CONTÍNUA EM AMBIENTES ADVERSOS

CHRISTIAN DAYAN ARCOS GORDILLO 13 June 2018 (has links)
This thesis presents and examines innovative contributions to the front-end of automatic speech recognition (ASR) systems for speech enhancement and recognition in adverse environments. The first proposal applies a median filter to the probability distribution function of each cepstral coefficient before using a transformation to a distortion-invariant domain, adapting noisy speech to the clean reference environment through histogram modification. Motivated by psychophysical studies of the human auditory system, which show that sound reaching the ear is subjected to a process called Auditory Scene Analysis (ASA) that examines how the auditory system separates the sound sources making up the acoustic input, three new, independently applied approaches are proposed for speech enhancement and recognition. The first estimates a new mask in the spectral domain using the short-time Fourier transform (STFT). The proposed mask applies the Local Binary Pattern (LBP) technique to the signal-to-noise ratio (SNR) of each time-frequency (T-F) unit to estimate an Ideal Neighborhood Mask (INM). Continuing with this approach, the thesis then proposes LBP-based masking of wavelet transforms to enhance the temporal spectra of wavelet coefficients at high frequencies. Finally, a new method of estimating the INM is proposed, using a supervised Deep Neural Network (DNN) learning algorithm to classify the T-F units obtained at the output of the filter banks as belonging to the same sound source (predominantly speech or predominantly noise). Performance is compared with the traditional IBM and IRM mask techniques, in terms of both objective speech quality and word error rates. The results of the proposed methods show clear improvements in noisy environments, significantly outperforming the conventional approaches.
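As a rough sketch of the LBP-on-SNR idea (the array sizes and dB values below are our own assumptions, not the thesis settings), each time-frequency unit receives an 8-bit code describing whether each neighbour's SNR meets or exceeds its own; uniform high-SNR neighbourhoods then indicate speech-dominated regions.

```python
import numpy as np

def lbp_codes(snr):
    """8-neighbour Local Binary Pattern code for each interior T-F unit."""
    c = snr[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (di, dj) in enumerate(offsets):
        neigh = snr[1 + di:snr.shape[0] - 1 + di, 1 + dj:snr.shape[1] - 1 + dj]
        code |= ((neigh >= c).astype(np.uint8) << bit)
    return code

snr = np.full((5, 5), -5.0)  # noise-dominated background (dB)
snr[1:4, 1:4] = 10.0         # a 3x3 speech-dominated region
codes = lbp_codes(snr)
# The centre unit sees all 8 neighbours at or above its own SNR -> code 255.
print(codes[1, 1])
```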
66

Modèle joint pour le traitement automatique de la langue : perspectives au travers des réseaux de neurones / Join model for NLP : a DNN framework

Tafforeau, Jérémie 20 November 2017 (has links)
NLP research has identified different levels of linguistic analysis (lexical, syntactic and semantic), leading to a hierarchical division of the tasks performed to analyse an utterance. The traditional approach relies on task-specific models arranged in cascade within processing chains (pipelines). This approach has a number of limitations: the empirical selection of model features, the accumulation of errors along the pipeline, and the lack of robustness to domain changes. These limitations lead to particularly high performance losses when there is a mismatch between training and usage conditions, as in the case of non-canonical language with little training data available, such as automatic transcriptions of spontaneous telephone conversations recorded in call centres. Disfluencies and speech-specific syntactic constructions, as well as recognition errors in the automatic transcriptions, cause a significant drop in the performance of analysis systems. It is therefore necessary to develop analysis systems that are both robust and flexible. This thesis aims to overcome the limitations of current systems by performing syntactic and semantic analysis with a deep neural network multitask model, while taking into account the variations of domain and/or language register within the data.
67

Joint Optimization of Quantization and Structured Sparsity for Compressed Deep Neural Networks

January 2018 (has links)
Deep neural networks (DNNs) have shown tremendous success in various cognitive tasks, such as image classification and speech recognition. However, their usage on resource-constrained edge devices has been limited by high computation and large memory requirements. To overcome these challenges, recent works have extensively investigated model compression techniques such as element-wise sparsity, structured sparsity and quantization. While most of these works apply the techniques in isolation, there have been very few studies on applying quantization and structured sparsity together on a DNN model. This thesis co-optimizes structured sparsity and quantization constraints on DNN models during training. Specifically, it obtains an optimal setting of 2-bit weights and 2-bit activations coupled with 4X structured compression by jointly exploring quantization and structured-compression settings. The optimal DNN model achieves 50X weight-memory reduction compared to an uncompressed floating-point DNN. This saving is significant, since applying only structured sparsity constraints achieves 2X memory savings and applying only quantization constraints achieves 16X. The algorithm has been validated on both high- and low-capacity DNNs and on wide-sparse and deep-sparse DNN models. Experiments demonstrate that the deep-sparse DNN outperforms the shallow-dense DNN, with the level of memory savings varying with DNN precision and sparsity. This work further proposes a Pareto-optimal approach to systematically extract optimal DNN models from a large set of sparse and dense DNN models. The resulting 11 optimal designs were further evaluated by considering overall DNN memory, which includes both activation and weight memory. There is only a small change in the memory footprint of the optimal designs corresponding to low-sparsity DNNs; for high-sparsity DNNs, however, activation memory cannot be ignored. / Dissertation/Thesis / Masters Thesis Computer Engineering 2018
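The abstract's memory arithmetic can be sketched as a back-of-the-envelope calculation. These are ideal figures under our own assumptions, ignoring the index and metadata overheads that reduce the reported combined saving to 50X and the sparsity-only saving to 2X in the actual work.

```python
def memory_bits(n_weights, bits=32, keep_fraction=1.0):
    """Ideal weight-storage cost, ignoring index/metadata overhead."""
    return n_weights * keep_fraction * bits

n = 1_000_000
baseline = memory_bits(n)  # dense fp32 weights

print(baseline / memory_bits(n, bits=2))                      # 16.0X from 2-bit quantization
print(baseline / memory_bits(n, keep_fraction=0.25))          # 4.0X from 4X structured pruning
print(baseline / memory_bits(n, bits=2, keep_fraction=0.25))  # 64.0X ideal combined saving
```

The gap between the ideal 64X and the reported 50X (and between the ideal 4X and the reported 2X for sparsity alone) reflects the bookkeeping costs of real sparse storage formats.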
68

Deep Neural Architectures for Automatic Representation Learning from Multimedia Multimodal Data / Architectures neuronales profondes pour l'apprentissage de représentation multimodales de données multimédias

Vukotic, Verdran 26 September 2017 (has links)
In this dissertation, the thesis that deep neural networks are suited to the analysis of visual, textual, and fused visual and textual content is discussed. The work draws on the capacity of neural networks to learn abstract representations, evaluates their ability to learn multimodal representations in unsupervised or supervised manners, and makes the following main contributions: 1) Recurrent neural networks for spoken language understanding (slot filling): different architectures are compared for this task, with the aim of modeling both the input context and the output label dependencies. 2) Action prediction from single images: we propose an architecture that learns a representation of an image depicting a human action in order to predict the evolution of motion in a video; the originality of the proposed model lies in its ability to predict frames at an arbitrary distance in a video, and it is evaluated on videos using a single frame as input. 3) Bidirectional multimodal encoders: the main contribution of this thesis is a neural architecture that translates from one modality to the other and conversely, offering an improved multimodal representation space in which the initially disjoint representations can be translated and fused. The approach was studied mainly for structuring video collections, and was extensively evaluated in international benchmarks on the task of video hyperlinking, where it defined the state of the art. 4) Generative adversarial networks for multimodal fusion: continuing on the topic of multimodal fusion, we evaluate conditional generative adversarial networks for learning multimodal representations; in addition to providing such representations, generative adversarial networks make it possible to visualize the learned model directly in the image domain.
69

Reduction of Temperature Forecast Errors with Deep Neural Networks / Reducering av temperaturprognosfel med djupa neuronnätverk

Isaksson, Robin January 2018 (has links)
Deep artificial neural networks are a type of machine learning model that can find and exploit patterns in data. One of their many applications is regression analysis. In this thesis, deep artificial neural networks were applied to estimating the error of surface temperature forecasts produced by a numerical weather prediction model. The ability to estimate a forecast's error is equivalent to the ability to reduce that error, since the estimated error can be subtracted from the forecast. Six years of forecast data from the period 2010-2015, produced by the European Centre for Medium-Range Weather Forecasts' (ECMWF) numerical weather prediction model, together with data from fourteen meteorological observation stations, were used to train and evaluate error-predicting deep neural networks. The neural networks were able to reduce the forecast errors at all tested locations, to varying extents. The largest reduction was 83.0% of the original error, a 16.7 °C² decrease in the mean-square error. The networks' error-reduction performance was compared with that of a contemporary Kalman filter as implemented by the Swedish Meteorological and Hydrological Institute (SMHI). The neural network implementation performed better at six of the seven evaluated stations, while the Kalman filter was marginally better at one station.
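The core post-processing idea, subtracting a learned error estimate from the raw forecast, can be sketched with a toy stand-in error model. This is our own construction: the thesis uses deep networks, not the mean-bias estimate below, but the correction mechanism is the same.

```python
import numpy as np

rng = np.random.default_rng(0)
truth = 15 + 5 * np.sin(np.linspace(0, 6, 200))     # "observed" temperatures
forecast = truth + 2.0 + rng.normal(0, 0.5, 200)    # biased, noisy NWP output

# Stand-in "error model": the mean forecast error learned on a training split.
train_err = (forecast - truth)[:100].mean()

# Correct the held-out forecasts by offsetting the estimated error.
corrected = forecast[100:] - train_err

mse_raw = np.mean((forecast[100:] - truth[100:]) ** 2)
mse_corr = np.mean((corrected - truth[100:]) ** 2)
print(mse_corr < mse_raw)  # True: removing the estimated error reduces MSE
```

A neural network replaces `train_err` with a prediction conditioned on the forecast situation, which is why it can outperform simpler recursive estimators such as a Kalman filter.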
70

Efficient and Robust Deep Learning through Approximate Computing

Sanchari Sen (9178400) 28 July 2020 (has links)
Deep Neural Networks (DNNs) have greatly advanced the state of the art in a wide range of machine learning tasks involving image, video, speech and text analytics, and are deployed in numerous widely used products and services. Improvements in the capabilities of hardware platforms such as Graphics Processing Units (GPUs) and specialized accelerators have been instrumental in enabling these advances, as they have allowed more complex and accurate networks to be trained and deployed. However, the enormous computational and memory demands of DNNs continue to increase with growing data size and network complexity, posing a continuing challenge to computing-system designers. For instance, state-of-the-art image recognition DNNs require hundreds of millions of parameters and hundreds of billions of multiply-accumulate operations, while state-of-the-art language models require hundreds of billions of parameters and several trillion operations to process a single input instance. Another major obstacle to the adoption of DNNs, despite their impressive accuracies on a range of datasets, has been their lack of robustness. Specifically, recent efforts have demonstrated that small, carefully introduced input perturbations can force a DNN to behave in unexpected and erroneous ways, which can have severe consequences in safety-critical DNN applications like healthcare and autonomous vehicles. In this dissertation, we explore approximate computing as an avenue to improve the speed and energy efficiency of DNNs, as well as their robustness to input perturbations.

Approximate computing involves executing selected computations of an application in an approximate manner, while generating favorable trade-offs between computational efficiency and output quality. The intrinsic error resilience of machine learning applications makes them excellent candidates for approximate computing, allowing us to achieve execution-time and energy reductions with minimal effect on the quality of outputs. This dissertation performs a comprehensive analysis of different approximate computing techniques for improving the execution efficiency of DNNs. Complementary to generic approximation techniques like quantization, it identifies approximation opportunities based on the specific characteristics of three popular classes of networks, which vary considerably in their network structure and computational patterns: Feed-forward Neural Networks (FFNNs), Recurrent Neural Networks (RNNs) and Spiking Neural Networks (SNNs).

First, in the context of feed-forward neural networks, we identify sparsity, or the presence of zero values in the data structures (activations, weights, gradients and errors), as a major source of redundancy and therefore an easy target for approximations. We develop lightweight micro-architectural and instruction-set extensions to a general-purpose processor core that enable it to dynamically detect zero values when they are loaded and skip future instructions that are rendered redundant by them. Next, we explore LSTMs (the most widely used class of RNNs), which map sequences from an input space to an output space. We propose hardware-agnostic approximations that dynamically skip redundant symbols in the input sequence and discard redundant elements in the state vector to achieve execution-time benefits. Following that, we consider SNNs, an emerging class of neural networks that represent and process information as sequences of binary spikes. Observing that spike-triggered updates along synaptic connections are the dominant operation in SNNs, we propose hardware and software techniques that identify connections which minimally impact the output quality and deactivate them dynamically, skipping any associated updates.

The dissertation also delves into the efficacy of combining multiple approximate computing techniques to improve the execution efficiency of DNNs. In particular, we focus on the combination of quantization, which reduces the precision of DNN data structures, and pruning, which introduces sparsity in them. We observe that the ability of pruning to reduce the memory demands of quantized DNNs decreases with precision, as the overhead of storing non-zero locations alongside the values starts to dominate in different sparse encoding schemes. We analyze this overhead and the overall compression of three different sparse formats across a range of sparsity and precision values, and propose a hybrid compression scheme that identifies the optimal sparse format for a pruned low-precision DNN.

Along with improved execution efficiency, the dissertation explores an additional advantage of approximate computing in the form of improved robustness. We propose ensembles of quantized DNN models with different numerical precisions as a new approach to increasing robustness against adversarial attacks. It is based on the observation that quantized neural networks often demonstrate much higher robustness to adversarial attacks than full-precision networks, but at the cost of a substantial loss in accuracy on the original (unperturbed) inputs. We overcome this limitation to achieve the best of both worlds, i.e., the higher unperturbed accuracy of the full-precision models combined with the higher robustness of the low-precision models, by composing them in an ensemble.

In summary, this dissertation establishes approximate computing as a promising direction to improve the performance, energy efficiency and robustness of neural networks.
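The mixed-precision ensemble idea can be sketched with toy linear models (our own stand-ins, not the dissertation's DNNs): quantize a copy of the weights to 2 bits and average the two members' logits, so the full-precision member preserves clean accuracy while the quantized member contributes its robustness.

```python
import numpy as np

def quantize(w, bits=2):
    """Uniform symmetric weight quantization to the given bit width."""
    levels = 2 ** (bits - 1) - 1            # 2-bit signed -> levels in {-1, 0, 1}
    scale = np.abs(w).max() / levels
    return np.round(w / scale) * scale

rng = np.random.default_rng(1)
w_fp = rng.normal(size=(4, 3))              # toy classifier: 4 features -> 3 classes
w_q = quantize(w_fp, bits=2)                # low-precision ensemble member

x = rng.normal(size=4)
logits = 0.5 * (x @ w_fp) + 0.5 * (x @ w_q)  # average the members' logits
print(logits.shape)                          # (3,)
```

In the dissertation's setting, each ensemble member is a full DNN at a different precision; the averaging step is the same.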
