1121

Towards automated learning from software development issues: Analyzing open source project repositories using natural language processing and machine learning techniques

Salov, Aleksandar January 2017
This thesis presents an in-depth investigation of how natural language processing and machine learning techniques can be used to perform a comprehensive analysis of programming issues found in different open source project repositories hosted on GitHub. The research examines issues gathered from a number of JavaScript repositories, based on their user-generated textual descriptions. The primary goal of the study is to explore how natural language processing and machine learning methods can facilitate the process of identifying and categorizing distinct issue types. The research then goes a step further and investigates how these same techniques can support users in searching for potential solutions to these issues. For this purpose, an initial proof-of-concept implementation is developed, which collects over 30 000 JavaScript issues from over 100 GitHub repositories. The system extracts the titles of the issues and cleans and processes the data before supplying it to an unsupervised clustering model, which tries to uncover discernible similarities and patterns within the examined dataset. The main system is supplemented by a dedicated web application prototype, which enables users to employ the underlying machine learning model to find solutions to their programming-related issues. The developed implementation is evaluated through a number of measures. First, the trained clustering model is assessed by two independent groups of external reviewers - one group of fellow researchers and another of practitioners in the software industry - to determine whether the resulting categories contain distinct types of issues. Moreover, to find out whether the system can facilitate the search for issue solutions, the web application prototype is tested in a series of user sessions with participants who are not only representative of the main target group that can benefit most from such a system, but who also have a mixture of practical and theoretical backgrounds. The results demonstrate that the proposed solution can effectively categorize issues according to their type, solely based on the user-generated free-text title. This provides strong evidence that natural language processing and machine learning techniques can be utilized for analyzing issues and automating the overall learning process. However, the study was unable to conclusively determine whether these same methods can aid the search for issue solutions. Nevertheless, the thesis provides a detailed account of how this problem was addressed and can therefore serve as a basis for future research.
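As a concrete illustration of the pipeline described above - gather issue titles, vectorize, and cluster without supervision - the following sketch uses TF-IDF features and k-means. The thesis does not publish its code, so the library choice (scikit-learn), the sample titles, and all parameter values are assumptions made for this example.

```python
# Minimal sketch, assuming scikit-learn: cluster user-generated issue
# titles with TF-IDF + k-means. Titles and cluster count are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

titles = [
    "TypeError: cannot read property 'map' of undefined",
    "Build fails on Node 8 with webpack 4",
    "Feature request: add dark mode to settings page",
    "npm install throws EACCES permission error",
]

# Clean/vectorize: lowercase, strip English stop words, keep unigrams+bigrams.
vectorizer = TfidfVectorizer(lowercase=True, stop_words="english",
                             ngram_range=(1, 2), min_df=1)
X = vectorizer.fit_transform(titles)

# Unsupervised clustering; the number of issue categories is a free choice.
model = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = model.fit_predict(X)

for title, label in zip(titles, labels):
    print(label, title)
```

In the actual study, the resulting clusters were then reviewed by researchers and practitioners to judge whether they correspond to distinct issue types.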
1122

Unsupervised Information Extraction From Text – Extraction and Clustering of Relations between Entities / Extraction d'Information Non Supervisée à Partir de Textes – Extraction et Regroupement de Relations entre Entités

Wang, Wei 16 May 2013
Unsupervised information extraction in open domains is a recent evolution of information extraction suited to contexts in which the information need is only loosely specified. Within this framework, the thesis focuses on the extraction and clustering of relations between entities at a large scale. Relation extraction aims to discover relations of unforeseen types in texts. These relations are semi-structured: they associate elements referring to knowledge structures defined a priori - here, the entities they connect - with elements given only through a linguistic characterization, namely their type. Extraction proceeds in two stages: candidate relations are first extracted using simple but efficient criteria, then filtered according to more advanced ones. The filtering itself combines two steps: a first step uses heuristics to quickly eliminate false relations while preserving good recall, and a second step relies on statistical models to refine the selection of candidate relations. Relation clustering has a twofold objective: organizing the extracted relations so that their types can be characterized by grouping semantically equivalent relations, and offering a synthetic view of them to end-users. It is performed with a multi-level strategy that accommodates both a large volume of relations and elaborate grouping criteria. A first, basic level of clustering gathers relations that are close in their linguistic expression, using a vector similarity measure on a bag-of-words representation, to form highly homogeneous clusters. A second level is then applied to handle more semantic phenomena such as synonymy and paraphrase, merging basic clusters that cover semantically equivalent relations. This second level builds on similarity measures defined at the level of words, relation instances and relation clusters, exploiting either WordNet-type resources or distributional thesauri. Finally, the work illustrates the value of clustering relations along a thematic dimension, complementing the semantic dimension of the groupings above. This thematic clustering is performed indirectly, by grouping the textual topical contexts of the relations; it offers an additional axis for structuring the relations, easing their global comprehension, and also a means of invalidating semantic groupings based on polysemous terms used with different senses. The thesis also addresses the evaluation of unsupervised information extraction through internal and external measures. For the external measures, an interactive method is proposed for manually building a large set of reference clusters; its application to a large newspaper corpus produced a reference against which the different clustering methods proposed in the thesis were evaluated.
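The two-level strategy - surface clustering on bag-of-words vectors, then semantic merging of basic clusters via WordNet - can be sketched as follows. The toy relation phrases, the thresholds, and the NLTK/scikit-learn calls are illustrative assumptions, not the thesis's implementation.

```python
# Sketch under assumed libraries: level 1 groups relation phrases by
# bag-of-words cosine similarity; level 2 merges basic clusters whose
# content words are close in WordNet (synonymy/paraphrase).
from nltk.corpus import wordnet as wn   # requires nltk.download("wordnet")
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.cluster import AgglomerativeClustering

phrases = ["was acquired by", "was bought by", "has acquired",
           "was founded by", "founded"]

# Level 1: bag-of-words vectors, agglomerative clustering on cosine distance.
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(phrases).toarray()
basic = AgglomerativeClustering(n_clusters=None, distance_threshold=0.5,
                                metric="cosine",
                                linkage="average").fit_predict(X)

def word_sim(w1, w2):
    # Best WordNet path similarity over all senses of the lemmatized words.
    best = 0.0
    for s1 in wn.synsets(wn.morphy(w1) or w1):
        for s2 in wn.synsets(wn.morphy(w2) or w2):
            best = max(best, s1.path_similarity(s2) or 0.0)
    return best

# Level 2: merge basic clusters whose content words are WordNet-similar.
terms = vec.get_feature_names_out()
cluster_words = {}
for row, c in zip(X, basic):
    cluster_words.setdefault(c, set()).update(terms[i] for i in row.nonzero()[0])

merged = {c: c for c in cluster_words}
ids = sorted(cluster_words)
for i, a in enumerate(ids):
    for b in ids[i + 1:]:
        if any(word_sim(x, y) >= 0.4         # threshold is illustrative
               for x in cluster_words[a] for y in cluster_words[b]):
            merged[b] = merged[a]

for phrase, c in zip(phrases, basic):
    print(merged[c], phrase)
```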
1123

Semantiska modeller för syntetisk textgenerering - en jämförelsestudie / Semantic Models for Synthetic Text Generation - A Comparative Study

Åkerström, Joakim, Peñaloza Aravena, Carlos January 2018
The ability to express feelings in words, or to move others with them, has always been an admired and rare gift. This project is about building a text generator capable of writing text in the style of remarkable men and women with this gift. The work was carried out by training a neural network on quotes written by outstanding people such as Oscar Wilde, Mark Twain and Charles Dickens. The network cooperates with two different semantic models, Word2Vec and One-Hot, and the three of them together make up our text generator. With the generated texts, a survey was conducted to collect students' opinions on their quality, and thereby to evaluate the suitability of the two semantic models. Analysis of the results showed that most respondents found the texts coherent and fun to read, and that Word2Vec performed significantly better than One-Hot.
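A compact sketch of the comparison at the heart of the project - the same recurrent generator fed either by a true one-hot representation (a frozen identity embedding) or by pretrained Word2Vec vectors - might look as follows. PyTorch and gensim, the tiny quote corpus, and all sizes and hyperparameters are assumptions for illustration; the thesis's actual network is not reproduced here.

```python
# Minimal sketch, assuming PyTorch + gensim: one LSTM next-word generator,
# two interchangeable input representations (one-hot vs Word2Vec).
import torch
import torch.nn as nn
from gensim.models import Word2Vec

quotes = [["we", "are", "all", "in", "the", "gutter"],
          ["but", "some", "of", "us", "are", "looking", "at", "the", "stars"]]
vocab = sorted({w for q in quotes for w in q})
idx = {w: i for i, w in enumerate(vocab)}
V = len(vocab)

class Generator(nn.Module):
    def __init__(self, embedding):
        super().__init__()
        self.emb = embedding
        self.lstm = nn.LSTM(embedding.embedding_dim, 64, batch_first=True)
        self.out = nn.Linear(64, V)          # next-word distribution
    def forward(self, x):
        h, _ = self.lstm(self.emb(x))
        return self.out(h)

# One-hot variant: a frozen identity matrix is a genuine one-hot encoding.
one_hot = nn.Embedding.from_pretrained(torch.eye(V), freeze=True)

# Word2Vec variant: vectors trained on the quote corpus with gensim.
w2v = Word2Vec(sentences=quotes, vector_size=32, min_count=1, epochs=50)
semantic = nn.Embedding.from_pretrained(
    torch.from_numpy(w2v.wv[vocab].copy()), freeze=True)

for name, emb in [("one-hot", one_hot), ("word2vec", semantic)]:
    model = Generator(emb)
    x = torch.tensor([[idx["we"], idx["are"]]])
    print(name, model(x).shape)   # (1, 2, V) logits over the vocabulary
```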
1124

Automatická korektura chyb ve výstupu strojového překladu / Automatic Error Correction of Machine Translation Output

Variš, Dušan January 2016
We present MLFix, an automatic statistical post-editing system that is a spiritual successor to the rule-based system Depfix. The aim of this thesis was to investigate possible approaches to automatically identifying the most common morphological errors produced by state-of-the-art machine translation systems, and to train statistical models built on the acquired knowledge. We performed both automatic and manual evaluation of the system and compared the results with Depfix. The system was developed mainly on English-to-Czech machine translation output; however, the aim was to generalize the post-editing process so that it can be applied to other language pairs. We modified the original pipeline to post-edit English-German machine translation output and performed an additional evaluation of this modification.
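As a drastically simplified illustration of the statistical post-editing idea, the sketch below learns word-level corrections from pairs of machine translation output and references, then applies the most frequent fix. This toy frequency model only stands in for MLFix's morphological features and trained classifiers; the data and the one-substitution assumption are invented.

```python
# Toy sketch: learn word-level fixes from (MT output, reference) pairs of
# equal length, then post-edit new output with the most frequent fix.
from collections import Counter, defaultdict

train = [
    ("to je velký město", "to je velké město"),   # adjective agreement fix
    ("vidím velký město", "vidím velké město"),
    ("mám nový auto",     "mám nové auto"),
]

fixes = defaultdict(Counter)
for hyp, ref in train:
    h, r = hyp.split(), ref.split()
    if len(h) == len(r):
        for hw, rw in zip(h, r):
            if hw != rw:
                fixes[hw][rw] += 1        # observed correction hw -> rw

def post_edit(sentence):
    return " ".join(fixes[w].most_common(1)[0][0] if w in fixes else w
                    for w in sentence.split())

print(post_edit("to je velký auto"))      # -> "to je velké auto"
```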
1125

Zodpovídání dotazů o obrázcích / Visual Question Answering

Hajič, Jakub January 2017
Visual Question Answering (VQA) is a recently proposed multimodal task in the general area of machine learning. The input to this task consists of a single image and an associated natural language question, and the output is the answer to that question. In this thesis we propose two incremental modifications to an existing model, which won the VQA Challenge in 2016 using multimodal compact bilinear pooling (MCB), a novel way of combining modalities. First, we add a language attention mechanism, and on top of that we introduce an image attention mechanism focusing on objects detected in the image ("region attention"). We also experiment with ways of combining these in a single end-to-end model. The thesis describes the MCB model and our extensions, together with their two different implementations, and evaluates them on the original VQA Challenge dataset for direct comparison with the original work.
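The MCB operation the base model relies on can be sketched compactly: count-sketch each modality's feature vector, then multiply the sketches in the Fourier domain, which approximates the outer product of the two vectors in a fraction of the memory. The dimensions and the NumPy implementation below are illustrative assumptions, not the authors' code.

```python
# Sketch of multimodal compact bilinear pooling (MCB) with NumPy.
import numpy as np

rng = np.random.default_rng(0)
d_img, d_txt, d_out = 2048, 1024, 16000

# Fixed random hashes h and signs s define each count sketch.
h_img, s_img = rng.integers(0, d_out, d_img), rng.choice([-1.0, 1.0], d_img)
h_txt, s_txt = rng.integers(0, d_out, d_txt), rng.choice([-1.0, 1.0], d_txt)

def count_sketch(x, h, s, d):
    y = np.zeros(d)
    np.add.at(y, h, s * x)        # y[h[i]] += s[i] * x[i]
    return y

def mcb(img_feat, txt_feat):
    a = count_sketch(img_feat, h_img, s_img, d_out)
    b = count_sketch(txt_feat, h_txt, s_txt, d_out)
    # Circular convolution of sketches = elementwise product in Fourier space.
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

fused = mcb(rng.standard_normal(d_img), rng.standard_normal(d_txt))
print(fused.shape)                # (16000,) joint multimodal feature
```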
1126

Extrakce relací v policejních záznamech / Relation extraction in police records

Ejem, Richard January 2017
This work addresses the problem of relation extraction between named entities at the sentence level, assuming the named entities are already tagged in the text, on the domain of police reports written by the Anti-drug Department of the Police of the Czech Republic. We used various machine learning methods in combination with tree kernel functions, as well as methods based on sentence syntax rules. None of the methods achieved satisfactory results on the data provided by the Police of the Czech Republic. A follow-up analysis showed that the relation annotation in the data omitted many relations that were obvious to a human reader, which turned out to be the reason why supervised machine learning was unsuccessful. Later in this work we present several manually identified rules for recognizing relations. The findings may be helpful for future research on processing these police reports.
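To make the rule-based side concrete, here is a toy version of a syntax rule of the kind described: if two tagged entities attach to the same governing verb in the dependency tree, emit a relation labeled by that verb. The data structures, the example sentence, and the single rule are invented for illustration; the thesis used richer rules and tree kernels over real parses of Czech police reports.

```python
# Toy sketch: relation extraction via one dependency-tree rule.
from collections import namedtuple

Token = namedtuple("Token", "i form pos head entity")  # head = -1 for root
# "Smith sold drugs to Jones", pre-parsed and entity-tagged.
sent = [
    Token(0, "Smith", "NOUN", 1, "PERSON"),
    Token(1, "sold",  "VERB", -1, None),
    Token(2, "drugs", "NOUN", 1, "SUBSTANCE"),
    Token(3, "to",    "ADP",  4, None),
    Token(4, "Jones", "NOUN", 1, "PERSON"),
]

def verb_governor(tok):
    # Climb the head chain until a verb (or the root) is reached.
    while tok.head != -1 and sent[tok.head].pos != "VERB":
        tok = sent[tok.head]
    return tok.head

entities = [t for t in sent if t.entity]
for i, a in enumerate(entities):
    for b in entities[i + 1:]:
        g = verb_governor(a)
        if g != -1 and g == verb_governor(b):
            print(f"{sent[g].form}({a.form}, {b.form})")
# -> sold(Smith, drugs), sold(Smith, Jones), sold(drugs, Jones)
```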
1127

[en] A MACHINE LEARNING APPROACH FOR PORTUGUESE TEXT CHUNKING / [pt] UMA ABORDAGEM DE APRENDIZADO DE MÁQUINA PARA SEGMENTAÇÃO TEXTUAL NO PORTUGUÊS

GUILHERME CARLOS DE NAPOLI FERREIRA 10 February 2017
[en] Text chunking is a very relevant Natural Language Processing task that consists in dividing a sentence into disjoint sequences of syntactically correlated words. One of the factors that contribute most to its importance is that its results serve as significant input to more complex linguistic problems, among them full parsing, clause identification, dependency parsing, semantic role labeling and machine translation. In particular, Machine Learning approaches to these tasks benefit greatly from the use of a chunk feature. A respectable number of effective chunk extraction strategies for the English language have been presented over the last few years. However, as far as we know, no comprehensive study has been done on text chunking for Portuguese, showing its benefits. The scope of this work is the Portuguese language, and its objective is twofold. First, we analyze the impact of different chunk definitions, using a heuristic that generates chunks from previously annotated full parses. Then, we propose Machine Learning models for chunk extraction based on the Entropy Guided Transformation Learning technique. We employ the Bosque corpus, from the Floresta Sintá(c)tica project, for our experiments. Using gold-standard values determined by our heuristic, a chunk feature improves the F1 score of a clause identification system for Portuguese by 6.85 and the accuracy of a dependency parsing system by 1.54. Moreover, our best chunk extractor achieves an F1 of 87.95 when automatic part-of-speech tags are applied. The empirical findings indicate that chunk information derived by our heuristic is indeed relevant to more elaborate tasks targeting Portuguese. Furthermore, the effectiveness of our extractors is comparable to that of state-of-the-art counterparts for English, considering that our proposed models are reasonably simple.
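For reference, the F1 figures quoted above are computed over chunk spans rather than individual tags. A minimal sketch of such scoring - reading BIO tags into (start, end, type) spans and comparing predicted spans against gold ones - is shown below; the tag sequences are invented examples, not data from the Bosque corpus.

```python
# Sketch: span-level precision/recall/F1 for BIO-encoded chunks.
def bio_to_spans(tags):
    spans, start = [], None
    for i, t in enumerate(tags + ["O"]):          # sentinel flushes last chunk
        if start is not None and (t == "O" or t.startswith("B-") or
                                  t[2:] != tags[start][2:]):
            spans.append((start, i, tags[start][2:]))
            start = None
        if t.startswith("B-"):
            start = i
    return set(spans)

gold = ["B-NP", "I-NP", "B-VP", "O", "B-NP"]
pred = ["B-NP", "I-NP", "B-VP", "B-PP", "B-NP"]

g, p = bio_to_spans(gold), bio_to_spans(pred)
tp = len(g & p)
precision, recall = tp / len(p), tp / len(g)
f1 = 2 * precision * recall / (precision + recall)
print(f"P={precision:.2f} R={recall:.2f} F1={f1:.2f}")   # P=0.75 R=1.00 F1=0.86
```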
1128

[en] DEEP ARCHITECTURE FOR QUOTATION EXTRACTION / [pt] ARQUITETURA PROFUNDA PARA EXTRAÇÃO DE CITAÇÕES

LUIS FELIPE MULLER DE OLIVEIRA HENRIQUES 28 July 2017
[en] Quotation extraction and attribution is the task of identifying quotations in a given text and associating them with their authors. In this work, we present a quotation extraction and attribution system for the Portuguese language. The task has previously been approached with various techniques, for a variety of languages and datasets. Traditional models for the task consist of manually extracting a rich set of hand-designed features and using them to feed a shallow classifier. In this work, unlike the traditional approach, we avoid hand-designed features and instead use unsupervised learning techniques and deep neural networks to automatically learn features relevant to solving the task. By avoiding manual feature engineering, our machine learning model becomes easily adaptable to other domains and languages. Our model is trained and evaluated on the GloboQuotes corpus, where it achieves an F1 score of 89.43 percent.
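To illustrate the hand-designed tradition the deep model replaces, here is a minimal rule-based baseline for quotation extraction and attribution: a regex finds quoted spans and attributes each to the name following a speech verb. The verb list, pattern, and example sentence are assumptions made for this sketch; the thesis's neural approach works quite differently.

```python
# Toy baseline: regex quotation extraction + nearest-speech-verb attribution.
import re

SPEECH_VERBS = r"(?:said|says|afirmou|disse|declarou)"
pattern = re.compile(
    r'"(?P<quote>[^"]+)"\s*,?\s*' + SPEECH_VERBS + r"\s+(?P<speaker>[A-Z]\w+)"
)

text = '"The results exceeded expectations", said Oliveira on Tuesday.'
for m in pattern.finditer(text):
    print(f'{m.group("speaker")}: {m.group("quote")}')
# -> Oliveira: The results exceeded expectations
```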
1129

Le web social et le web sémantique pour la recommandation de ressources pédagogiques / Social Web and semantic Web for recommendation in e-learning

Ghenname, Mérième 02 December 2015
This research was carried out under a joint supervision agreement between two universities: in France, the Université Jean Monnet of Saint-Etienne, Hubert Curien laboratory, under the supervision of Frédérique Laforest, Christophe Gravier and Julien Subercaze; and in Morocco, the Université Mohamed V of Rabat, LeRMA team, under the supervision of Rachida Ajhoun and Mounia Abik. Knowledge, education and learning are major concerns in today's society, and technologies for human learning aim to promote, stimulate, support and validate the learning process. Our approach explores the opportunities raised by making the Social Web and the Semantic Web cooperate for e-learning. More precisely, we work on enriching learner profiles based on their activities on the Social Web. The Social Web can be a very valuable source of information, since it involves users in the information world and gives them the ability to participate in the construction and dissemination of knowledge. We focus on tracking the different types of contributions in learners' spontaneous collaborative activities on social networks. The learner profile is thus based not only on knowledge extracted from the learner's activities on the e-learning system, but also on their many activities on social networks. In particular, we propose a methodology for exploiting the hashtags contained in users' writings to automatically generate learner interests and thereby enrich their profiles. Hashtags, however, require some processing before they can serve as a source of knowledge about user interests. We define a method for identifying the semantics of hashtags and the semantic relations between the meanings of different hashtags. We also introduce the concept of the Folksionary, a hashtag dictionary that, for each hashtag, groups its definitions into units of meaning. The semantically enriched hashtags are then used to feed the learner profile so as to personalize recommendations of learning material. The goal is to build a semantic representation of learners' activities and interests on social networks in order to enrich their profiles. We also present our general approach to multidimensional recommendation in an e-learning environment, built on three types of filtering: personalized filtering based on the learner profile, social filtering based on the learner's activities on social networks, and local filtering based on statistics of the learner's interactions with the system. Our implementation focuses on personalized recommendation.
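A minimal sketch of the hashtag-to-meaning step behind the Folksionary idea could look as follows: segment a hashtag on camel case, look the head word up in WordNet, and list its glosses as candidate meanings. NLTK/WordNet, the segmentation rule, and the naive head-word choice are assumptions; the sense grouping in the thesis is more elaborate.

```python
# Toy sketch: from a hashtag to candidate meanings via WordNet glosses.
import re
from nltk.corpus import wordnet as wn   # requires nltk.download("wordnet")

def segment(hashtag):
    # "#MachineLearning" -> ["machine", "learning"] via camel-case splits.
    return [w.lower() for w in re.findall(r"[A-Z][a-z]+|[a-z]+|\d+",
                                          hashtag.lstrip("#"))]

def folksionary_entry(hashtag):
    words = segment(hashtag)
    head = words[-1]                    # naive head-word choice (assumption)
    return {s.name(): s.definition() for s in wn.synsets(head)}

for sense, gloss in folksionary_entry("#MachineLearning").items():
    print(sense, "->", gloss)
```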
1130

A Research Bed For Unit Selection Based Text To Speech Synthesis System

Konakanchi, Parthasarathy 02 1900
After trying the Festival Speech Synthesis System, we decided to develop our own TTS framework, conducive to performing the research experiments necessary for developing good-quality TTS for Indian languages. Most attempts at Indian language TTS have no prosody model, no provision for handling foreign-language words, and no phrase break prediction that would allow appropriate pauses to be introduced in the synthesized speech. Further, in the Indian context, there is a real felt need for a bilingual TTS involving English along with the Indian language. In fact, it may be desirable to also have a trilingual TTS, which can additionally handle the language of the neighboring state or Hindi. Thus, there is a felt need for a full-fledged TTS development framework which lends itself to experimentation involving all the above issues and more. This thesis is therefore a serious attempt to develop a modular, unit selection based TTS framework. The developed system has been tested for its effectiveness in creating intelligible speech in Tamil and Kannada, and has been used to carry out two research experiments on TTS. The first part of the work is the design and development of a corpus-based concatenative Tamil speech synthesizer in Matlab and C. A synthesis database has been created with 1027 phonetically rich, pre-recorded sentences, segmented at the phone level. From the sentence to be synthesized, specifications of the required target units are predicted. During synthesis, database units are selected that best match the target specification according to a distance metric and a concatenation quality metric. To accelerate matching, the features of the end frames of the database units are precomputed and stored. The selected units are concatenated to produce synthetic speech. The high mean opinion scores obtained for the TTS output reveal that speech synthesized with our TTS is intelligible and acceptably natural, and could be put to commercial use with some additional features. Experiments carried out by others using this TTS framework have shown that, whenever the required phonetic context is not available in the synthesis database, similar phones that are perceptually indistinguishable may be substituted. The second part of the work deals with the design and modification of the developed TTS framework to be embedded in mobile phones. Commercial GSM FR, EFR and AMR speech codecs are used for compressing our synthesis database. Perception experiments reveal that speech synthesized from a highly compressed database is reasonably natural. This holds promise for reading SMSs and emails on mobile phones in Indian languages. Finally, we observe that incorporating prosody and pause models for Indian language TTS would further enhance the quality of the synthetic speech. These are some of the potential, unexplored areas for future research in speech synthesis in Indian languages.
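The unit-selection search described above is typically a dynamic program over candidate units, trading off a target cost against a concatenation cost at each join. The sketch below shows this Viterbi-style search with invented toy costs; the real system's spectral distance and concatenation-quality metrics are not reproduced.

```python
# Toy sketch: Viterbi unit selection over a candidate lattice.
# Candidate database units per target phone: (unit_id, toy acoustic feature).
candidates = [
    [("a1", 1.0), ("a2", 1.3)],       # candidates for target phone /a/
    [("m1", 2.0), ("m2", 2.4)],       # candidates for target phone /m/
    [("a3", 1.1), ("a4", 0.9)],
]
targets = [1.1, 2.1, 1.0]             # toy target specifications

def target_cost(unit, spec):
    return abs(unit[1] - spec)

def concat_cost(prev, unit):
    # Smooth joins: penalize feature jumps at unit boundaries.
    return 0.5 * abs(unit[1] - prev[1])

# Dynamic programming: best (cost, path) ending in each candidate unit.
best = [(target_cost(u, targets[0]), [u]) for u in candidates[0]]
for t, col in enumerate(candidates[1:], start=1):
    best = [min((c + target_cost(u, targets[t]) + concat_cost(path[-1], u),
                 path + [u])
                for c, path in best)
            for u in col]

cost, path = min(best)
print([u[0] for u in path], round(cost, 2))   # jointly optimal unit sequence
```

Note that the jointly optimal sequence can start from a unit that is not the best match for its own target, because a smoother join more than pays for the worse target cost - which is why the search is done with dynamic programming rather than greedily.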
