About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

On the dimension of the square of a compact metric space X of dimension n and the set of embeddings of X into R2n

Melo, Givanildo Donizeti de [UNESP] 23 March 2016 (has links)
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) / In this work we study the following result: given a compact metric space X of dimension n, the subspace consisting of all embeddings of X into R2n is dense in the space of all continuous maps of X into R2n if and only if dim(X x X) < 2n. The proof presented is the one given by J. Krasinkiewicz and S. Spiez.
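Stated compactly, the result reads as follows (a sketch of the statement; the uniform metric on the mapping space is assumed, as is standard for such density theorems):

```latex
% Density theorem of Krasinkiewicz and Spiez, as studied in this dissertation:
% for a compact metric space X with dim X = n,
\[
  \overline{\operatorname{Emb}(X,\mathbb{R}^{2n})} = C(X,\mathbb{R}^{2n})
  \quad\Longleftrightarrow\quad
  \dim(X \times X) < 2n,
\]
% where C(X, R^{2n}) is the space of continuous maps equipped with the uniform
% metric d(f, g) = sup_{x in X} ||f(x) - g(x)||, Emb(X, R^{2n}) is the subspace
% of topological embeddings, and the bar denotes closure in that metric.
```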
42

Predicting Gene Functions and Phenotypes by combining Deep Learning and Ontologies

Kulmanov, Maxat 08 April 2020 (has links)
The amount of available protein sequences is rapidly increasing, mainly as a consequence of the development and application of high-throughput sequencing technologies in the life sciences. A key question in the life sciences is to identify the functions of proteins, and furthermore to identify the phenotypes that may be associated with a loss (or gain) of function in these proteins. Protein functions are generally determined experimentally, and it is clear that experimental determination of protein functions will not scale to the current, and rapidly increasing, amount of available protein sequences (over 300 million). Furthermore, identifying phenotypes resulting from loss of function is even more challenging, as the phenotype is modified by whole-organism interactions and environmental variables. It is clear that accurate computational prediction of protein functions and loss-of-function phenotypes would be of significant value both to academic research and to the biotechnology industry. We developed and expanded novel methods for representation learning, predicting protein functions and their loss-of-function phenotypes. We use deep neural network algorithms and combine them with symbolic inference into neural-symbolic algorithms. Our work significantly improves previously developed methods for predicting protein functions through methodological advances in machine learning, incorporation of broader data types that may be predictive of functions, and improved systems for neural-symbolic integration. The methods we developed are generic and can be applied to other domains in which similar types of structured and unstructured information exist. In the future, our methods can be applied to the prediction of protein function for metagenomic samples in order to evaluate the potential for discovery of novel proteins of industrial value. Our methods can also be applied to the prediction of loss-of-function phenotypes in human genetics, and the results can be incorporated in a variant prioritization tool for diagnosing patients with Mendelian disorders.
43

Towards Learning Compact Visual Embeddings using Deep Neural Networks

January 2019 (has links)
abstract: Feature embeddings differ from raw features in the sense that the former obey certain properties, such as a notion of similarity/dissimilarity in their embedding space. word2vec is a preeminent example in this direction, where similarity in the embedding space is measured in terms of cosine similarity. Such language embedding models have seen numerous applications in both the language and vision communities, as they capture the information in the modality (English language) efficiently. Inspired by these language models, this work focuses on learning embedding spaces for two visual computing tasks: 1. Image Hashing, 2. Zero-Shot Learning. The training set was used to learn embedding spaces over which similarity/dissimilarity is measured using several distance metrics, such as Hamming, Euclidean, and cosine distances. While the above-mentioned language models learn generic word embeddings, in this work task-specific embeddings were learnt, which can be used for Image Retrieval and Classification separately. Image Hashing is the task of mapping images to binary codes such that some notion of user-defined similarity is preserved. The first part of this work focuses on designing a new framework that uses the hashtags associated with web images to learn the binary codes. Such codes can be used in several applications, such as Image Retrieval and Image Classification. Further, this framework requires no labelled data, making it very inexpensive. Results show that the proposed approach surpasses state-of-the-art approaches by a significant margin. Zero-shot classification is the task of classifying a test sample into a new class that was not seen during training. This is possible by establishing a relationship between the training and the testing classes using auxiliary information. In the second part of this thesis, a framework is designed that trains using the handcrafted attribute vectors and word vectors but doesn't require the expensive attribute vectors at test time. More specifically, an intermediate space is learnt between the word vector space and the image feature space using the handcrafted attribute vectors. Preliminary results on two zero-shot classification datasets show that this is a promising direction to explore. / Dissertation/Thesis / Masters Thesis Computer Engineering 2019
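As a minimal sketch of the three metrics named above, here is how similarity/dissimilarity is typically computed over binary hash codes and real-valued embeddings; the 8-bit codes and 64-dimensional vectors are toy stand-ins, not the dimensions used in the thesis:

```python
import numpy as np

def hamming_distance(a: np.ndarray, b: np.ndarray) -> int:
    """Number of differing bits between two binary codes (0/1 vectors)."""
    return int(np.sum(a != b))

def euclidean_distance(a: np.ndarray, b: np.ndarray) -> float:
    """L2 distance between two real-valued embeddings."""
    return float(np.linalg.norm(a - b))

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two embeddings, as used by word2vec."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy example: 8-bit hash codes for two "images", plus a pair of 64-d embeddings.
code_a = np.array([1, 0, 1, 1, 0, 0, 1, 0])
code_b = np.array([1, 1, 1, 0, 0, 0, 1, 1])
print(hamming_distance(code_a, code_b))  # 3 differing bits

rng = np.random.default_rng(0)
emb_a, emb_b = rng.random(64), rng.random(64)
print(euclidean_distance(emb_a, emb_b), cosine_similarity(emb_a, emb_b))
```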
44

Neural Enhancement Strategies for Robust Speech Processing

Nawar, Mohamed Nabih Ali Mohamed 10 March 2023 (has links)
In real-world scenarios, speech signals are often contaminated with environmental noise and reverberation, which degrade speech quality and intelligibility. Lately, the development of deep learning algorithms has marked milestones in speech-based research fields, e.g. speech recognition, spoken language understanding, etc. As one of the crucial topics in the speech processing research area, speech enhancement aims to restore clean speech signals from noisy signals. In recent decades, many conventional statistical speech enhancement algorithms have been proposed. However, the performance of these approaches is limited in non-stationary noisy conditions. The rise of deep learning-based approaches for speech enhancement has led to revolutionary advances in performance. In this context, speech enhancement is formulated as a supervised learning problem, which tackles the open challenges introduced by the conventional speech enhancement approaches. In general, deep learning speech enhancement approaches are categorized into frequency-domain and time-domain approaches. In particular, we experiment with the performance of the Wave-U-Net model, a solid and superior time-domain approach for speech enhancement. First, we attempt to improve the performance of back-end speech-based classification tasks in noisy conditions. In detail, we propose a pipeline that integrates the Wave-U-Net (later modified to the Dilated Encoder Wave-U-Net) as a pre-processing stage for noise elimination, with a temporal convolution network (TCN) for the intent classification task. Both models are trained independently from each other. Reported experimental results show that the modified Wave-U-Net model not only improves speech quality and intelligibility, measured in terms of the PESQ and STOI metrics, but also improves the back-end classification accuracy. Later, it was observed that the disjoint training approach often introduces signal distortion in the output of the speech enhancement module and can thus deteriorate the back-end performance. Motivated by this, we introduce a set of fully time-domain joint training pipelines that combine the Wave-U-Net model with the TCN intent classifier. The difference between these architectures is the interconnections between the front-end and back-end. All architectures are trained with a loss function that combines the MSE loss as the front-end loss with the cross-entropy loss for the classification task. Based on our observations, we claim that the joint training architecture that equally balances both components' contributions yields better classification accuracy. Lately, the release of large-scale pre-trained feature extraction models has considerably simplified the development of speech classification and recognition algorithms. However, environmental noise and reverberation still negatively affect performance, making robustness in noisy conditions mandatory in real-world applications. One way to mitigate the noise effect is to integrate a speech enhancement front-end that removes artifacts from the desired speech signals. Unlike the state-of-the-art enhancement approaches that operate either on speech spectrograms or directly on time-domain signals, we study how enhancement can be applied directly to speech embeddings, extracted using the Wav2Vec and WavLM models.
We investigate a variety of training approaches, considering different flavors of joint and disjoint training of the speech enhancement front-end and of the classification/recognition back-end. We perform exhaustive experiments on the Fluent Speech Commands and Google Speech Commands datasets, contaminated with noises from the Microsoft Scalable Noisy Speech Dataset, as well as on LibriSpeech, contaminated with noises from the MUSAN dataset, considering intent classification, keyword spotting, and speech recognition tasks respectively. Results show that enhancing the speech embedding is a viable and computationally effective approach, and provide insights into the most promising training approaches.
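A minimal sketch of the joint objective described above, in PyTorch; the 0.5/0.5 split mirrors the "equally balancing" configuration, while the alpha parameterization and tensor shapes are illustrative assumptions rather than the thesis's exact setup:

```python
import torch
import torch.nn as nn

mse = nn.MSELoss()          # front-end (enhancement) loss
ce = nn.CrossEntropyLoss()  # back-end (intent classification) loss

def joint_loss(enhanced, clean, logits, labels, alpha=0.5):
    """Combined objective; alpha = 0.5 equally balances both components."""
    return alpha * mse(enhanced, clean) + (1.0 - alpha) * ce(logits, labels)

# Toy shapes: a batch of 4 one-second 16 kHz waveforms and 8 intent classes.
enhanced = torch.randn(4, 16000, requires_grad=True)  # front-end output
clean = torch.randn(4, 16000)                         # clean reference
logits = torch.randn(4, 8, requires_grad=True)        # back-end output
labels = torch.randint(0, 8, (4,))                    # intent labels

loss = joint_loss(enhanced, clean, logits, labels)
loss.backward()  # gradients flow to both front-end and back-end parameters
```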
45

Improving Document Clustering by Refining Overlapping Cluster Regions

Upadhye, Akshata Rajendra January 2022 (has links)
No description available.
46

Extractive Text Summarization of Greek News Articles Based on Sentence-Clusters

Kantzola, Evangelia January 2020 (has links)
This thesis introduces an extractive summarization system for Greek news articles based on sentence clustering. The main purpose of the work is to evaluate the impact of three different types of text representation, Word2Vec embeddings, TF-IDF and LASER embeddings, on the summarization task. Taking these techniques into account, we build three different versions of the initial summarizer. Moreover, we create a new corpus of gold standard summaries to evaluate the system summaries against. The new collection of reference summaries is merged with a part of the MultiLing Pilot 2011 corpus to constitute our main dataset. We perform both automatic and human evaluation. Our automatic ROUGE results suggest that System A, which employs averaged Word2Vec vectors to create sentence embeddings, outperforms the other two systems by yielding higher ROUGE-L F-scores. Contrary to our initial hypotheses, System C, using LASER embeddings, fails to surpass even the Word2Vec embeddings method, showing a sometimes weak sentence representation. With regard to the scores obtained in the manual evaluation task, we observe that System A, using averaged Word2Vec vectors, and System C, with LASER embeddings, tend to produce more coherent and adequate summaries than System B, which employs TF-IDF. Furthermore, the majority of system summaries are rated very high with respect to non-redundancy. Overall, System A, utilizing averaged Word2Vec embeddings, performs quite successfully according to both evaluations.
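As an illustration of System A's sentence representation, the sketch below averages Word2Vec vectors per sentence and then clusters the sentence embeddings; k-means with a nearest-to-centroid pick is an assumed stand-in for the thesis's clustering step, and `w2v` is a hypothetical token-to-vector mapping:

```python
import numpy as np
from sklearn.cluster import KMeans

def sentence_embedding(sentence, w2v):
    """Average the Word2Vec vectors of in-vocabulary tokens (System A's scheme)."""
    dim = len(next(iter(w2v.values())))
    vecs = [w2v[t] for t in sentence.lower().split() if t in w2v]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def summarize(sentences, w2v, n_clusters=3):
    """Cluster sentence embeddings; pick the sentence nearest each centroid."""
    X = np.stack([sentence_embedding(s, w2v) for s in sentences])
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(X)
    picks = []
    for c in range(n_clusters):
        idx = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(X[idx] - km.cluster_centers_[c], axis=1)
        picks.append(idx[np.argmin(dists)])  # most central sentence per cluster
    return [sentences[i] for i in sorted(picks)]  # keep original article order

# Toy demo with a hypothetical 4-token vocabulary of 3-d vectors.
rng = np.random.default_rng(0)
w2v = {w: rng.random(3) for w in ["greek", "news", "article", "summary"]}
sents = ["greek news", "news article", "article summary", "greek summary"]
print(summarize(sents, w2v, n_clusters=2))
```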
47

Leveraging Degree of Isomorphism to Improve Cross-Lingual Embedding Space for Low-Resource Languages

Bhowmik, Kowshik January 2022 (has links)
No description available.
48

[en] A FAST AND SPACE-ECONOMICAL APPROACH TO WORD MOVER'S DISTANCE

MATHEUS TELLES WERNER 02 April 2020 (has links)
[en] The Word Mover's Distance (WMD) proposed in Kusner et al. [ICML, 2015] is a distance between documents that takes advantage of semantic relations among words that are captured by their word embeddings. This distance proved to be quite effective, obtaining state-of-the-art error rates for classification tasks, but also impracticable for large collections or documents because it needs to compute a transportation problem on a complete bipartite graph for each pair of documents. By using assumptions that are supported by empirical properties of the distances between word embeddings, we simplify WMD so that we obtain a new distance whose computation requires the solution of a max-flow problem in a sparse graph, which can be solved much faster than the transportation problem in a dense graph. Our experiments show that we can obtain a performance gain of up to three orders of magnitude over WMD while maintaining the same error rates in document classification tasks.
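The transportation problem that makes WMD expensive can be written as a small linear program; the sketch below (plain NumPy/SciPy, uniform word weights, Euclidean ground costs) is the dense formulation the thesis starts from, not the sparse max-flow relaxation it proposes:

```python
import numpy as np
from scipy.optimize import linprog

def wmd(weights_a, weights_b, emb_a, emb_b):
    """Word Mover's Distance as a transportation LP over the complete
    bipartite graph between the two documents' word embeddings."""
    n, m = len(weights_a), len(weights_b)
    # Ground cost: Euclidean distance between every pair of word embeddings.
    cost = np.linalg.norm(emb_a[:, None, :] - emb_b[None, :, :], axis=2)
    # Equality constraints on the (row-major) flattened flow matrix T:
    # each row i of T sums to weights_a[i], each column j to weights_b[j].
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1.0
    for j in range(m):
        A_eq[n + j, j::m] = 1.0
    b_eq = np.concatenate([weights_a, weights_b])
    res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.fun  # minimal total transport cost = WMD

# Toy documents: 3 and 2 words, uniform nBOW weights, 50-d embeddings.
rng = np.random.default_rng(0)
d1, d2 = np.full(3, 1 / 3), np.full(2, 1 / 2)
print(wmd(d1, d2, rng.random((3, 50)), rng.random((2, 50))))
```

The O(n·m) dense cost matrix and the cubic-time LP solve per document pair are exactly why the dense formulation does not scale, motivating the sparse-graph simplification above.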
49

A study of continuous word representations applied to the automatic detection of speech recognition errors

Ghannay, Sahar 20 September 2017 (has links)
My thesis is a study of continuous word representations (word embeddings) applied to the automatic detection of speech recognition errors. Our study focuses on the use of a neural approach to improve ASR error detection using word embeddings. The exploitation of continuous word representations is motivated by the fact that ASR error detection consists in locating possible linguistic or acoustic incongruities in automatic transcriptions. The aim is therefore to find the appropriate word representation that makes it possible to capture pertinent information in order to detect these anomalies. Our contribution in this thesis concerns several initiatives. First, we start with a preliminary study in which we propose a neural architecture able to integrate different types of features, including word embeddings. Second, we propose a deep study of continuous word representations. This study focuses, on the one hand, on the evaluation of different types of linguistic word embeddings and their combination in order to take advantage of their complementarities and, on the other hand, on acoustic word embeddings. Then, we present a study on the analysis of classification errors, with the aim of identifying the errors that are difficult to detect. Perspectives for improving the performance of our system are also proposed, by modeling the errors at the sentence level. Finally, we exploit the linguistic and acoustic embeddings, as well as the information provided by our ASR error detection system, in several downstream applications.
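A minimal sketch of the kind of architecture the preliminary study describes: heterogeneous per-word descriptors concatenated and fed to a small classifier that labels each hypothesis word as correct or erroneous. All dimensions and the choice of descriptors are illustrative assumptions, not the thesis's exact configuration:

```python
import torch
import torch.nn as nn

# Hypothetical descriptor sizes: a 300-d linguistic word embedding, a 100-d
# acoustic word embedding, and 10 extra scalar descriptors (e.g. ASR
# confidence scores, word length).
LING, ACOU, EXTRA = 300, 100, 10

classifier = nn.Sequential(
    nn.Linear(LING + ACOU + EXTRA, 128),
    nn.ReLU(),
    nn.Linear(128, 2),  # two classes: correct vs. erroneous hypothesis word
)

# One hypothesis word: concatenate its descriptors into a single feature vector.
word_features = torch.cat(
    [torch.randn(1, LING), torch.randn(1, ACOU), torch.randn(1, EXTRA)], dim=1
)
logits = classifier(word_features)
print(logits.argmax(dim=1))  # predicted label for this word
```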
50

Complex-Valued Embedding Models for Knowledge Graphs

Trouillon, Théo 29 September 2017 (has links)
The explosion of widely available relational data in the form of knowledge graphs has enabled many applications, including automated personal agents, recommender systems and enhanced web search results. The very large size and notorious incompleteness of these databases call for automatic knowledge graph completion methods to make these applications viable.
Knowledge graph completion, also known as link prediction, deals with automatically understanding the structure of large knowledge graphs (labeled directed graphs) to predict missing entries (labeled edges). An increasingly popular approach consists in representing knowledge graphs as third-order tensors, and using tensor factorization methods to predict their missing entries. State-of-the-art factorization models propose different trade-offs between modeling expressiveness, and time and space complexity. We introduce a new model, ComplEx (for Complex Embeddings), to reconcile both expressiveness and complexity through the use of complex-valued factorization, and explore its link with unitary diagonalization. We corroborate our approach theoretically and show that all possible knowledge graphs can be exactly decomposed by the proposed model. Our approach based on complex embeddings is arguably simple, as it only involves a complex-valued trilinear product, whereas other methods resort to more and more complicated composition functions to increase their expressiveness. The proposed ComplEx model is scalable to large datasets as it remains linear in both space and time, while consistently outperforming alternative approaches on standard link-prediction benchmarks. We also demonstrate its ability to learn useful vectorial representations for other tasks, by enhancing word embeddings that improve performance on the natural language problem of entailment recognition between pairs of sentences. In the last part of this thesis, we explore the ability of factorization models to learn relational patterns from observed data. By their vectorial nature, it is not only hard to interpret why this class of models works so well, but also to understand where they fail and how they might be improved. We conduct an experimental survey of state-of-the-art models, not towards a purely comparative end, but as a means to get insight into their inductive abilities. To assess the strengths and weaknesses of each model, we first create simple tasks that exhibit atomic properties of knowledge graph relations, and then common inter-relational inference through synthesized genealogies. Based on these experimental results, we propose new research directions to improve on existing models, including ComplEx.
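The complex-valued trilinear product at the heart of ComplEx is compact enough to sketch directly; the scoring function below is the published form, while the rank and random embeddings are toy values:

```python
import numpy as np

def complex_score(e_s: np.ndarray, w_r: np.ndarray, e_o: np.ndarray) -> float:
    """ComplEx score for a triple (subject, relation, object):
    Re(<w_r, e_s, conj(e_o)>). Conjugating e_o makes the score asymmetric
    in subject and object, so directed (non-symmetric) relations can be
    modeled. Higher scores mean the labeled edge is more plausible."""
    return float(np.real(np.sum(w_r * e_s * np.conj(e_o))))

# Toy rank-10 complex embeddings for one subject, relation, and object.
rng = np.random.default_rng(0)
k = 10
e_s = rng.normal(size=k) + 1j * rng.normal(size=k)
w_r = rng.normal(size=k) + 1j * rng.normal(size=k)
e_o = rng.normal(size=k) + 1j * rng.normal(size=k)
print(complex_score(e_s, w_r, e_o))
```

Evaluating the score is O(k) in both time and space per triple, which is the linearity in space and time the abstract refers to.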
