  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
871

Translating LaTeX to Coq: A Recurrent Neural Network Approach to Formalizing Natural Language Proofs

Carman, Benjamin Andrew 18 May 2021 (has links)
No description available.
872

Categorizing Blog Spam

Bevans, Brandon 01 June 2016 (has links)
The internet has matured into the focal point of our era. Its ecosystem is vast, complex, and in many regards unaccounted for. One of the most prevalent aspects of the internet is spam. Like the rest of the internet, spam has evolved from simply meaning ‘unwanted email’ to a blanket term encompassing any unsolicited or illegitimate content that appears in the wide range of media on the internet. Many forms of spam permeate the internet, and spam architects continue to develop tools and methods to avoid detection. On the other side, cyber security engineers continue to develop more sophisticated detection tools to curb the harmful effects of spam. This virtual arms race has no end in sight. Most efforts thus far have gone toward accurately distinguishing spam from ham, and rightfully so, since initial detection is essential. However, research is lacking in understanding the current ecosystem of spam, spam campaigns, and the behavior of the botnets that drive the majority of spam traffic. This thesis focuses on characterizing spam, particularly the spam that appears in forums, where it is delivered by bots posing as legitimate users. Forum spam is used primarily to push advertisements or to boost other websites’ perceived popularity by including HTTP links in the content of the post. We conduct an experiment to collect a sample of the blog posts and network activity of spambots on the internet. We then present a corpus available for analysis and proceed with our own. We cluster associated groups of users and IP addresses into entities, which we accept as a model of the underlying botnets that interact with our honeypots. Using Natural Language Processing (NLP) and Machine Learning (ML), we determine that semantic-based models of botnets are sufficient for distinguishing them from one another. We also find that the syntactic structure of posts varies little from botnet to botnet. Finally, we confirm that botnet behavior and content largely hold across different domains.
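As an illustration of the kind of semantic modelling this abstract describes, here is a minimal sketch (not the thesis's actual pipeline; the posts, URLs, and cluster count are invented) that groups spam posts by TF-IDF similarity and treats each cluster as a candidate botnet entity:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical honeypot posts; in the thesis these come with user/IP metadata.
posts = [
    "Buy cheap watches at http://shop.example",
    "Cheap watches, best prices http://shop.example",
    "Earn money from home, click http://work.example",
    "Work from home and earn money http://work.example",
]

# TF-IDF over unigrams and bigrams as a simple semantic representation.
vectorizer = TfidfVectorizer(ngram_range=(1, 2))
X = vectorizer.fit_transform(posts)

# Each cluster approximates one campaign/botnet "entity".
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for post, label in zip(posts, labels):
    print(label, post)
```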
873

Počítač jako inteligentní spoluhráč ve slovně-asociační hře Krycí jména / Computer as an Intelligent Partner in the Word-Association Game Codenames

Obrtlík, Petr January 2018 (has links)
This thesis deals with associations between words. It describes the design and implementation of a system that can stand in for a human player in the word-association game Codenames. The system uses the Gensim and FastText libraries to create semantic models; the relationships between words are learned by analyzing the CWC-2011 text corpus.
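A rough sketch of how such an assistant might propose clues with a Gensim FastText model (assumptions: gensim 4.x and a tokenized, one-sentence-per-line corpus file standing in for CWC-2011; the function name and filtering rule are illustrative, not the thesis's design):

```python
from gensim.models import FastText
from gensim.models.word2vec import LineSentence

# Train on a tokenized corpus, one sentence per line (stand-in for CWC-2011).
sentences = LineSentence("corpus.txt")
model = FastText(sentences=sentences, vector_size=100, window=5, min_count=5)

def suggest_clue(targets, avoid, topn=10):
    """Return candidate clues close to all targets and far from words to avoid."""
    candidates = model.wv.most_similar(positive=targets, negative=avoid, topn=topn)
    # Discard clues containing a target word (illegal in Codenames).
    return [(word, score) for word, score in candidates
            if not any(t in word or word in t for t in targets)]

print(suggest_clue(["river", "bank"], avoid=["money"]))
```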
874

Dolovanie znalostí z textových dát použitím metód umelej inteligencie / Text Mining Based on Artificial Intelligence Methods

Povoda, Lukáš January 2018 (has links)
This work deals with the problem of text mining, which is becoming more popular due to the exponential growth of data in electronic form. The work explores contemporary methods and their improvement using optimization techniques, as well as the problem of text-data understanding in general. It addresses the problem in three ways: using traditional methods and their optimizations; using Big Data in the training phase and abstracting away language-dependent parts; and introducing a new method based on deep learning that is closer to how a human reads and understands text. The main aim of the dissertation was to propose a method for machine understanding of unstructured text data. The method was experimentally verified by classifying text data in five different languages (Czech, English, German, Spanish, and Chinese), demonstrating possible application to different language families. Validation on the Yelp evaluation database achieved accuracy 0.5% higher than current methods.
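As a hedged illustration of minimizing language-dependent parts (a baseline sketch, not the dissertation's method; the training texts and labels are invented), a character n-gram classifier avoids language-specific tokenization entirely:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative mixed-language training set: 1 = positive, 0 = negative.
train_texts = ["great food and service", "schrecklich, nie wieder",
               "excelente, muy recomendable", "awful experience"]
train_labels = [1, 0, 1, 0]

clf = make_pipeline(
    # Character n-grams within word boundaries: no tokenizer, works across scripts.
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
clf.fit(train_texts, train_labels)
print(clf.predict(["muy recomendable", "nie wieder"]))
```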
875

Information Extraction From User Generated Noisy Texts

Tabassum Binte Jafar, Jeniya January 2020 (has links)
No description available.
876

Rozpoznávání pojmenovaných entit / Named Entity Recognition

Rylko, Vojtěch January 2014 (has links)
This master's thesis describes the history and theoretical background of named-entity recognition, along with the implementation of a C++ system for named-entity recognition and disambiguation. The system uses a local disambiguation method and statistics generated from the Wikilinks web dataset. Various experiments and tests were performed with the implemented system and with alternative implementations; they show that the system is sufficiently accurate and fast. The system participated in the Entity Recognition and Disambiguation Challenge 2014.
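A toy sketch of the "most frequent sense" flavour of local disambiguation from link statistics (the counts, the boost heuristic, and the helper function are hypothetical, for illustration only):

```python
from collections import defaultdict

# Hypothetical precomputed statistics: surface form -> {entity: link count},
# i.e. how often each surface form pointed to each entity in a Wikilinks-style dataset.
link_counts = {
    "Washington": {"George_Washington": 120, "Washington,_D.C.": 300,
                   "Washington_(state)": 180},
}

def disambiguate(mention, context_words=(), boost=150):
    """Pick the entity with the highest link count, lightly boosted when a
    context word appears in the entity name (a stand-in for local context)."""
    scores = defaultdict(int)
    for entity, count in link_counts.get(mention, {}).items():
        scores[entity] = count
        for word in context_words:
            if word.lower() in entity.lower():
                scores[entity] += boost
    return max(scores, key=scores.get) if scores else None

print(disambiguate("Washington", context_words=["state"]))
```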
877

Représentations vectorielles et apprentissage automatique pour l’alignement d’entités textuelles et de concepts d’ontologie : application à la biologie / Vector Representations and Machine Learning for Alignment of Text Entities with Ontology Concepts : Application to Biology

Ferré, Arnaud 24 May 2019 (has links)
The impressive increase in the quantity of textual data makes it difficult today to analyze it without the assistance of tools. A text written in natural language is unstructured data, i.e. it cannot be interpreted by a specialized computer program, without which the information in texts remains largely under-exploited. Among tools for automatic extraction of information from text, we are interested in automatic text interpretation methods for the entity normalization task, which consists of automatically matching entity mentions in text to concepts in a reference terminology. To accomplish this task, we propose a new approach that aligns two types of vector representations of entities, each capturing part of their meaning: word embeddings for textual mentions and concept embeddings for concepts, designed specifically for this work. The alignment between the two is learned by supervised training. The developed methods were evaluated on a reference dataset from the biological domain and now represent the state of the art for this dataset. They are integrated into a natural language processing software suite, and the code is freely shared.
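A schematic sketch of the described alignment (random stand-in vectors, not the thesis's code or data): learn a supervised linear map from word-embedding space to concept-embedding space, then normalize a mention to its nearest concept:

```python
import numpy as np

rng = np.random.default_rng(0)
d_word, d_concept, n_train = 50, 20, 200

# Training pairs: mention embeddings X and the embeddings of their gold concepts Y.
X = rng.normal(size=(n_train, d_word))
W_true = rng.normal(size=(d_word, d_concept))
Y = X @ W_true + 0.01 * rng.normal(size=(n_train, d_concept))

# Supervised alignment: least-squares linear map from word to concept space.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Normalization: project a new mention and pick the nearest concept embedding.
concepts = rng.normal(size=(100, d_concept))   # stand-in ontology concept vectors
mention = rng.normal(size=(d_word,))
projected = mention @ W
nearest = np.argmin(np.linalg.norm(concepts - projected, axis=1))
print("predicted concept id:", nearest)
```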
878

Cold-start recommendation : from Algorithm Portfolios to Job Applicant Matching / Démarrage à froid en recommandation : des portfolios d'algorithmes à l'appariement automatique d'offres et de chercheurs d'emploi

Gonard, François 31 May 2018 (has links)
The need for personalized recommendations is motivated by the overabundance of online information, products, and social connections. This is typically tackled by recommender systems (RSs) that learn users' interests from past recorded activities. Recommendation is also desirable when estimating the relevance of an item requires complex reasoning based on experience; machine learning techniques are good candidates for simulating experience with large amounts of data. The present thesis focuses on the cold-start context in recommendation, i.e. the situation where either a new user desires recommendations or a brand-new item is to be recommended. Since no past interaction is available, RSs have to base their reasoning on side descriptions to form recommendations. Two such recommendation problems are investigated in this work, and recommender systems designed for the cold-start context are presented. First, the problem of choosing an optimization algorithm from a portfolio can be cast as a recommendation problem. We propose a two-component system combining a per-instance algorithm selector and a sequential scheduler to reduce the optimization cost of a brand-new problem instance and mitigate the risk of optimization failure. Both components are trained with past data to simulate experience, and alternately optimized to enforce their cooperation. The final system won the Open Algorithm Selection Challenge 2017. Second, automatic job-applicant matching (JAM) has recently received considerable attention in the recommendation community for applications in online recruitment platforms. We develop specific natural language (NL) modeling techniques and combine them with standard recommendation procedures to leverage both past user interactions and the textual descriptions of job positions. The NL and recommendation aspects of the JAM problem are studied on two real-world datasets, and the appropriateness of various RSs for applications similar to the JAM problem is discussed.
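A simplified sketch of the two-component idea (synthetic features and runtimes; the presolving schedule and time budget are illustrative assumptions, not the winning system): a per-instance selector predicts the most promising solver, preceded by a short hedging schedule of the others:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_instances, n_features, n_solvers = 300, 10, 4

# Synthetic portfolio data: instance features and per-solver runtimes.
features = rng.normal(size=(n_instances, n_features))
runtimes = rng.exponential(scale=10.0, size=(n_instances, n_solvers))
best_solver = runtimes.argmin(axis=1)   # oracle labels for training

# Selector: classify which solver is fastest for a given instance.
selector = RandomForestClassifier(n_estimators=100, random_state=0)
selector.fit(features, best_solver)

def solve(instance_features, budget=30.0, presolve_slice=2.0):
    """Run a short presolving schedule, then the selected solver with the rest."""
    schedule = [(s, presolve_slice) for s in range(n_solvers)]  # hedge against misprediction
    chosen = int(selector.predict(instance_features.reshape(1, -1))[0])
    schedule.append((chosen, budget - presolve_slice * n_solvers))
    return schedule

print(solve(rng.normal(size=n_features)))
```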
879

Apprendre par imitation : applications à quelques problèmes d'apprentissage structuré en traitement des langues / Imitation learning : application to several structured learning tasks in natural language processing

Knyazeva, Elena 25 May 2018 (has links)
Structured learning has become ubiquitous in Natural Language Processing; a multitude of applications, such as personal assistants, machine translation, and speech recognition, to name just a few, rely on such techniques. The structured learning problems that must now be solved are becoming increasingly complex and require an increasing amount of information at different linguistic levels (morphological, syntactic, etc.). It is therefore crucial to find the best trade-off between the degree of modelling detail and the exactness of the inference algorithm. Imitation learning aims to perform approximate learning and inference in order to better exploit richer dependency structures. In this thesis, we explore this learning setting, in particular the SEARN algorithm, both from a theoretical perspective and in terms of its practical application to Natural Language Processing tasks, especially complex tasks such as machine translation. Concerning the theoretical aspects, we introduce a unified framework for different families of imitation learning algorithms, allowing us to rederive their convergence properties in a simple way. With regard to the more practical side of our work, we use imitation learning first to experiment with free-order sequence labelling and second to explore two-step decoding strategies for machine translation.
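A bare-bones sketch in the SEARN/DAgger family for sequence labelling (synthetic data; this simplified loop aggregates expert actions along the learner's own roll-ins rather than implementing full SEARN policy mixing):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_sequence(T=20):
    """Synthetic sequence whose gold labels depend on neighbouring observations."""
    x = rng.normal(size=(T, 5))
    y = (x[:, 0] + 0.5 * np.roll(x[:, 0], 1) > 0).astype(int)
    return x, y

def state_features(x, t, prev_label):
    # State = current observation plus the previously predicted label.
    return np.concatenate([x[t], [prev_label]])

policy = None
dataset_X, dataset_y = [], []
for iteration in range(5):                  # imitation-learning iterations
    for _ in range(50):                     # sampled training sequences
        x, y = make_sequence()
        prev = 0
        for t in range(len(x)):
            phi = state_features(x, t, prev)
            dataset_X.append(phi)
            dataset_y.append(y[t])          # expert action at the visited state
            # Roll in with the learned policy once it exists, else the expert.
            prev = y[t] if policy is None else int(policy.predict([phi])[0])
    policy = LogisticRegression(max_iter=1000).fit(dataset_X, dataset_y)
print("trained policy on", len(dataset_y), "state-action pairs")
```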
880

Biomedical Information Extraction Pipelines for Public Health in the Age of Deep Learning

January 2019 (has links)
Unstructured texts containing biomedical information from sources such as electronic health records, scientific literature, discussion forums, and social media offer an opportunity to extract information for a wide range of applications in biomedical informatics. Building scalable and efficient pipelines for natural language processing and extraction of biomedical information plays an important role in the implementation and adoption of applications in areas such as public health. Advances in machine learning and deep learning techniques have enabled rapid development of such pipelines. This dissertation presents entity extraction pipelines for two public health applications: virus phylogeography and pharmacovigilance. For virus phylogeography, geographical locations are extracted from biomedical scientific texts for metadata enrichment in the GenBank database, which contains 2.9 million virus nucleotide sequences. For pharmacovigilance, tools are developed to extract adverse drug reactions from social media posts, opening avenues for post-market drug surveillance from non-traditional sources. Across these pipelines, high variance is observed in extraction performance among the entities of interest while using state-of-the-art neural network architectures. To explain the variation, linguistic measures are proposed to serve as indicators of entity extraction performance and to provide deeper insight into the domain complexity and the challenges associated with entity extraction. For both the phylogeography and pharmacovigilance pipelines presented in this work, the annotated datasets and applications are open source and freely available to the public to foster further research in public health. / Doctoral Dissertation, Biomedical Informatics, 2019
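A minimal sketch of location extraction with an off-the-shelf tagger (assumes the HuggingFace transformers package and its default pretrained NER model, not the dissertation's pipelines; the sentence is invented):

```python
from transformers import pipeline

# Generic pretrained NER model; the dissertation's pipelines are domain-specific.
ner = pipeline("token-classification", aggregation_strategy="simple")

text = ("The H5N1 isolate was collected from poultry farms "
        "in Guangdong, China in 2019.")
for entity in ner(text):
    if entity["entity_group"] == "LOC":       # keep only location mentions
        print(entity["word"], round(float(entity["score"]), 3))
```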
