251

Answering Deep Queries Specified in Natural Language with Respect to a Frame Based Knowledge Base and Developing Related Natural Language Understanding Components

January 2015 (has links)
abstract: Question Answering has been an active research area for decades, but it has recently taken the spotlight following IBM Watson's success on Jeopardy! and the spread of digital assistants such as Apple's Siri, Google Now, and Microsoft Cortana to every smartphone and browser. However, most Question Answering research targets factual questions rather than deep ones such as ``How'' and ``Why'' questions. In this dissertation, I suggest a different approach to tackling this problem. I believe that the answers to deep questions need to be formally defined before they can be found. Because such answers must be defined over some representation, and structured representations are easier to work with than raw natural language text, I define Knowledge Description Graphs (KDGs), graphical structures containing information about events, entities, and classes. I then propose formulations and algorithms to construct KDGs from a frame-based knowledge base, define the answers to various ``How'' and ``Why'' questions with respect to KDGs, and show how to obtain those answers from KDGs using Answer Set Programming. Moreover, I discuss how to derive missing information when constructing KDGs from an under-specified knowledge base, and how to answer many factual question types with respect to the knowledge base. Having defined the answers to various questions with respect to a knowledge base, I extend the work to specify both deep questions and the knowledge base in natural language text, and to generate natural language text from those specifications. Toward these goals, I developed NL2KR, a system that helps translate natural language into formal language. I show NL2KR's use in translating ``How'' and ``Why'' questions and in generating simple natural language sentences from natural language KDG specifications. Finally, I discuss applications of the developed components in Natural Language Understanding. / Dissertation/Thesis / Doctoral Dissertation Computer Science 2015
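To make the idea concrete, here is a minimal Python sketch of a toy KDG and a causal ``Why'' query. It is an illustration only: the graph, the `causes` relation, and the `why` function are invented stand-ins, and the dissertation itself defines answers formally and computes them with Answer Set Programming rather than graph traversal.

```python
import networkx as nx

# Toy Knowledge Description Graph: nodes are events/entities, edges carry
# relation labels. This schema is invented for illustration; the dissertation
# defines KDGs formally over a frame-based knowledge base.
kdg = nx.DiGraph()
kdg.add_edge("heat_water", "water_boils", relation="causes")
kdg.add_edge("water_boils", "steam_forms", relation="causes")

def why(graph, event):
    """Answer 'Why did <event> happen?' by collecting the causal parents
    of the event node -- a stand-in for the ASP-based answer definitions."""
    return [src for src, dst, data in graph.in_edges(event, data=True)
            if data["relation"] == "causes"]

print(why(kdg, "water_boils"))  # ['heat_water']
```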
252

Tell me why : uma arquitetura para fornecer explicações sobre revisões / Tell me why : an architecture to provide rich review explanations

Woloszyn, Vinicius January 2015 (has links)
What other people think has always been an important part of the decision-making process. For instance, people usually consult their friends to get an opinion about a book, a movie, or a restaurant. Nowadays, users publish their opinions on collaborative reviewing sites such as IMDB for movies, Yelp for restaurants, and TripAdvisor for hotels. Over time, these sites have built a massive database that connects users, items, and opinions expressed through a numeric rating and a free-text review explaining why they like or dislike a specific item. But this vast amount of data can hamper the user trying to form an opinion. Several related works provide review interpretations to users, offering different advantages for various types of summaries. However, they all share the same limitation: they provide neither personalized summaries nor contrasting reviews written by different segments of reviewers. Understanding and contrasting reviews written by different segments of reviewers is still an open research problem. Our work proposes a new architecture, called Tell Me Why (TMW), a project developed at the Grenoble Informatics Laboratory in cooperation with the Federal University of Rio Grande do Sul to give users a better understanding of reviews. We propose combining text analysis of reviews with mining of the structured data that results from crossing the reviewer and item dimensions. Additionally, this work investigates summarization methods used in the review domain. The output of our architecture consists of personalized text statements, produced with Natural Language Generation and composed of item attributes and summarized comments, that explain people's opinions about a particular item. Results from a comparative evaluation against Amazon's Most Helpful Review indicate that it is a promising approach that users find helpful.
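To suggest what crossing the reviewer and item dimensions can look like, here is a minimal Python sketch that groups ratings by an (invented) reviewer segment and verbalizes the contrast with a fixed template; TMW's actual NLG component and data model are not reproduced here.

```python
from statistics import mean

# Invented toy data: (reviewer_segment, rating) pairs for one item.
reviews = [("frequent_traveler", 4.5), ("frequent_traveler", 4.0),
           ("first_time_guest", 2.5), ("first_time_guest", 3.0)]

def contrastive_statement(item, reviews):
    """Group ratings by reviewer segment and verbalize the contrast --
    a simple template standing in for a full NLG component."""
    by_segment = {}
    for segment, rating in reviews:
        by_segment.setdefault(segment, []).append(rating)
    parts = [f"{seg.replace('_', ' ')}s rate it {mean(rs):.1f}/5"
             for seg, rs in by_segment.items()]
    return f"For {item}: " + ", while ".join(parts) + "."

print(contrastive_statement("Hotel X", reviews))
# For Hotel X: frequent travelers rate it 4.2/5, while first time guests rate it 2.8/5.
```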
253

Aquisição de Conhecimento de Mundo para Sistemas de Processamento de Linguagem Natural / World Knowledge Acquisition for Natural Language Processing Systems

José Wellington Franco da Silva 30 August 2013 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / One of the challenges of research in Natural Language Processing (NLP) is to provide semantic-linguistic resources that express world knowledge to support tasks such as information extraction, information retrieval, question answering, text summarization, semantic annotation of texts, and others. To meet this challenge, this work proposes strategies for acquiring world knowledge through two methods. The first is a semi-automatic method whose main idea is to apply a semantic reasoning process over pre-existing knowledge in a semantic base. The second is an automatic acquisition method that uses Wikipedia to generate semantic content. Wikipedia was chosen as the knowledge source because of the reliability, dynamism, and breadth of its content. We propose a method for acquiring semantic relations between concepts from the text of Wikipedia articles that exploits implicit knowledge present in Wikipedia and in hypermedia systems: the links between articles. Throughout the descriptive text of a Wikipedia article, links to other articles appear, and each is evidence of a relationship between the current article and the article referenced by the link. The proposed method aims to capture the semantic relationship expressed in the text between the current article and each linked article, without regular expressions, identifying similar relationships through a semantic similarity measure.
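To illustrate the kind of evidence the method starts from, here is a hedged Python sketch that pulls [[link]] targets and their surrounding sentences out of wikitext; the markup handling is deliberately simplified, and the thesis's similarity-based generalization step is only gestured at in the docstring.

```python
import re

# A snippet of Wikipedia-style wikitext; [[...]] marks links to other articles.
article = ("'''Photosynthesis''' is a process used by [[plant]]s to convert "
           "[[light energy]] into [[chemical energy]].")

def link_contexts(title, text):
    """For each [[link]] in the article, return (current_article, sentence,
    linked_article) triples -- the raw evidence from which a semantic
    relation is extracted. The thesis goes further, generalizing relations
    with a semantic similarity measure rather than keeping raw sentences."""
    triples = []
    for sentence in re.split(r"(?<=\.)\s+", text):
        for link in re.findall(r"\[\[([^\]|]+)(?:\|[^\]]+)?\]\]", sentence):
            triples.append((title, sentence, link))
    return triples

for triple in link_contexts("Photosynthesis", article):
    print(triple)
```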
255

Natural Language Interfaces to Databases

Chandra, Yohan 12 1900 (has links)
Natural language interfaces to databases (NLIDB) are systems that aim to bridge the gap between the languages used by humans and computers, automatically translating natural language sentences into database queries. This thesis proposes a novel approach to NLIDB using graph-based models. The system starts by collecting as much information as possible from existing databases and sentences, and transforms this information into a knowledge base. Given a new question, the system uses this knowledge to analyze and translate the sentence into its corresponding database query statement. The graph-based NLIDB system uses English as the natural language, a relational database model, and SQL as the formal query language. In experiments with natural language questions run against a large database containing information about U.S. geography, the system showed good performance compared to the state of the art in the field.
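As a rough illustration of the translation target, here is a hedged Python sketch that maps a question to a SQL query by naive keyword matching against an invented two-table schema from the U.S. geography domain; the thesis's actual system builds a knowledge base and performs graph-based analysis rather than keyword lookup.

```python
# Invented two-table schema from the U.S. geography domain; the mapping
# below is a naive keyword match, not the graph-based translation the
# thesis develops.
schema = {
    "state": {"columns": ["name", "population", "area"]},
    "city":  {"columns": ["name", "population", "state_name"]},
}

def translate(question):
    """Map a question to a SELECT statement by matching table and column
    words -- a toy stand-in for graph-based NL-to-SQL translation."""
    words = question.lower().rstrip("?").split()
    table = next((t for t in schema if t in words), None)
    if table is None:
        raise ValueError("no table term recognized")
    column = next((c for c in schema[table]["columns"] if c in words), "name")
    return f"SELECT {column} FROM {table};"

print(translate("What is the population of each state?"))
# SELECT population FROM state;
```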
256

memeBot: Automatic Image Meme Generation for Online Social Interaction

January 2020 (has links)
abstract: Internet memes have become a widespread tool that people use to interact and exchange ideas over social media, blogs, and open messengers. An Internet meme most commonly takes the form of an image that combines a picture, text, and humor, making it a powerful tool for delivering information. Image memes are used in viral marketing and mass advertising to propagate ideas ranging from simple commercials to those that can drive social change, such as countering hate speech. This work proposes to treat automatic image meme generation as a translation process and presents an end-to-end neural and probabilistic approach that generates an image-based meme for any given sentence using an encoder-decoder architecture. For a given input sentence, a meme is generated by combining a meme template image and a text caption: the template image is selected from a set of popular candidates by a selection module, and the caption is generated by an encoder-decoder model. An encoder maps the selected meme template and the input sentence into a meme embedding space, and a decoder then decodes the meme caption from that space. The generated natural language caption is conditioned on the input sentence and the selected meme template. The model learns the dependencies between meme captions and meme template images and generates new memes using the learned dependencies. The quality of the generated captions and memes is evaluated through both automated metrics and human evaluation. An experiment is designed to score how well the generated memes can represent popular tweets from Twitter conversations. Experiments on Twitter data show the efficacy of the model in generating memes capable of representing a sentence in online social interaction. / Dissertation/Thesis / Masters Thesis Computer Science 2020
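As a toy illustration of the selection module's role, the following Python sketch scores invented template descriptions against the input sentence with bag-of-words cosine similarity; the real system learns a meme embedding space and an encoder-decoder captioner, neither of which is reproduced here.

```python
from collections import Counter
from math import sqrt

# Invented template inventory; the real model embeds sentences and templates
# in a learned meme-embedding space, then decodes a caption with an
# encoder-decoder. Here selection is plain bag-of-words cosine similarity.
templates = {
    "success_kid": "celebrating a small win success achievement",
    "distracted_boyfriend": "tempted by a new shiny alternative choice",
}

def cosine(a, b):
    ca, cb = Counter(a.split()), Counter(b.split())
    dot = sum(ca[w] * cb[w] for w in ca)
    norm = sqrt(sum(v * v for v in ca.values())) * sqrt(sum(v * v for v in cb.values()))
    return dot / norm if norm else 0.0

def select_template(sentence):
    """Selection-module stand-in: score each template description against
    the input sentence and return the best match."""
    return max(templates, key=lambda t: cosine(sentence.lower(), templates[t]))

print(select_template("Finally fixed the bug, what a win"))  # success_kid
```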
257

Utterance Abstraction and Response Diversity for Open-Domain Dialogue Systems / オープンドメイン対話システムにおける発話の抽象化と応答の多様性

ZHAO, TIANYU 23 September 2020 (has links)
Kyoto University / 0048 / New-system doctoral program / Doctor of Informatics / 甲第22799号 / 情博第729号 / 新制||情||125 (University Library) / Department of Intelligence Science and Technology, Graduate School of Informatics, Kyoto University / Examining committee: Professor Tatsuya Kawahara, Professor Sadao Kurohashi, Professor Shinsuke Mori / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
258

DECEPTIVE REVIEW IDENTIFICATION VIA REVIEWER NETWORK REPRESENTATION LEARNING

Shih-Feng Yang (11502553) 19 December 2021 (has links)
With the growth of the popularity of e-commerce and mobile apps during the past decade, people rely on online reviews more than ever before for purchasing products, booking hotels, and choosing all kinds of services. Users share their opinions by posting product reviews on merchant sites or online review websites (e.g., Yelp, Amazon, TripAdvisor). Although online reviews are valuable information for people who are interested in products and services, many reviews are manipulated by spammers to provide untruthful information for business competition. Since deceptive reviews can damage the reputation of brands and mislead customers' buying behaviors, the identification of fake reviews has become an important topic for online merchants. Among the computational approaches proposed for fake review identification, network-based fake review analysis jointly considers the information from review text, reviewer behaviors, and product information. Researchers have proposed network-based methods (e.g., metapath) on heterogeneous networks, which have shown promising results.

However, we identified two research gaps in this study: 1) We argue that previous network-based reviewer representations are not sufficient to preserve the relationships of reviewers in networks. Specifically, previous studies considered only first-order proximity, which indicates the observable connection between reviewers, but not second-order proximity, which captures the neighborhood structures where two vertices overlap. Moreover, although previous network-based fake review studies (e.g., metapath) connect reviewers through feature nodes across heterogeneous networks, they ignored the multi-view nature of reviewers. A view is derived from a single type of proximity or relationship between the nodes, which can be characterized by a set of edges; in other words, reviewers can form different networks with regard to different relationships. 2) The text embeddings of reviews in previous network-based fake review studies were not combined with reviewer embeddings.

To tackle the first gap, we generated reviewer embeddings via MVE (Qu et al., 2017), a framework for multi-view network representation learning, and conducted spammer classification experiments to examine the effectiveness of the learned embeddings for distinguishing spammers from non-spammers. In addition, we performed unsupervised hierarchical clustering to observe the clusters of the reviewer embeddings. Our results show that the clusters generated from reviewer embeddings capture the difference between spammers and non-spammers better than those generated from reviewers' features.

To fill the second gap, we propose hybrid embeddings that combine review text embeddings with reviewer embeddings (i.e., the vector that represents a reviewer's characteristics, such as writing or behavioral patterns). We conducted fake review classification experiments to compare the performance of hybrid (text+reviewer) embeddings against text-only embeddings as features. Our results suggest that hybrid embeddings are more effective than text-only embeddings for fake review identification. Moreover, we compared the prediction performance of the hybrid embeddings with baselines and showed that our approach outperforms them on fake review identification experiments.

The contributions of this study are four-fold: 1) We adopted a multi-view representation learning approach for reviewer embedding learning and analyzed the efficacy of the embeddings for spammer classification and fake review classification. 2) We proposed a hybrid embedding that considers the characteristics of both the review text and the reviewer; our results are promising and suggest that hybrid embeddings are very effective for fake review identification. 3) We proposed a heuristic network construction approach that builds a user network based on user features. 4) We evaluated how different spammer thresholds impact the performance of fake review classification. Several studies have used the same datasets as this study, but most followed the spammer definition of Jindal and Liu (2008). We argue that the spammer definition should be configurable for different datasets; our findings show that by carefully choosing spammer thresholds for the target datasets, hybrid embeddings achieve higher efficacy for fake review classification.
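The hybrid construction itself is simple to state. Below is a minimal Python sketch, assuming random stand-in vectors in place of the real text embeddings and the MVE-learned reviewer embeddings; only the concatenation step reflects the study's design.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in embeddings: in the study, text embeddings come from review
# content and reviewer embeddings from multi-view network representation
# learning (MVE); here both are random vectors just to show the hybrid
# construction and the downstream classifier.
n, d_text, d_reviewer = 200, 32, 16
text_emb = rng.normal(size=(n, d_text))
reviewer_emb = rng.normal(size=(n, d_reviewer))
labels = rng.integers(0, 2, size=n)  # 1 = fake review, 0 = genuine

# Hybrid embedding: concatenate the two views per review.
hybrid = np.concatenate([text_emb, reviewer_emb], axis=1)

clf = LogisticRegression(max_iter=1000).fit(hybrid, labels)
print("train accuracy:", clf.score(hybrid, labels))
```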
259

Konzeption eines dreistufigen Transfers für die maschinelle Übersetzung natürlicher Sprachen / Design of a Three-Stage Transfer for the Machine Translation of Natural Languages

Laube, Annett, Karl, Hans-Ulrich 14 December 2012 (has links)
0 FOREWORD The analysis and synthesis algorithms required for translating programming languages have for some time been expressible fairly well in a language-independent way. This is reflected, among other things, in a multitude of generators that fully or partially automate the translation process. The syntax of the language to be processed is usually available in data form (graphs, lists) based on formal description means (e.g., BNF). In the field of natural language translation, the separation of language and processing algorithms has been achieved only in rudimentary form, if at all. The reasons are obvious: natural languages are more powerful, and their formal representation is difficult. If translation is also to cover spoken communication, i.e., to replace the human interpreter at an international conference or in a telephone call with a partner who speaks another language, real-time requirements arise that will force highly parallel approaches. Even when no real-time requirements apply, the translation process is extraordinarily complex. Solutions are sought using the interlingua approach and the transfer approach. Increasingly, formal description means from relatively well-researched subfields of computer science are being employed (operations over decorated trees, tree-to-tree translation strategies), in the hope that their results will lead further than the spectacular prototypes already on the market, which are often derived from heuristic approaches. [...] Contents: 0 Foreword p. 2; 1 Introduction p. 4; 2 The Components of the Three-Stage Transfer p. 5; 3 Formalization of the Composition p. 8; 4 Pre-Transfer Phase p. 11; 5 Formalization of the Pre-Transfer Phase p. 13; 6 Transfer Phase p. 18; 7 Formalization of the Transfer Phase p. 20; 8 Post-Transfer Phase p. 24; 9 Transfer Example p. 25; 10 Summary p. 29
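The three phases can be pictured with a small Python sketch over decorated trees; the tree encoding, the toy lexicon, and each phase body here are invented for illustration and do not reproduce the paper's formalization.

```python
# A decorated tree as (label, decorations, children); the concrete phases
# below are invented stand-ins for the pre-transfer, transfer, and
# post-transfer stages the paper formalizes.
def pre_transfer(tree):
    """Normalize the source analysis (e.g., canonicalize decorations)."""
    label, deco, children = tree
    return (label, {**deco, "normalized": True},
            [pre_transfer(c) for c in children])

LEXICAL_TRANSFER = {"Haus": "house", "klein": "small"}  # toy bilingual lexicon

def transfer(tree):
    """Tree-to-tree mapping: replace source labels with target labels."""
    label, deco, children = tree
    return (LEXICAL_TRANSFER.get(label, label), deco,
            [transfer(c) for c in children])

def post_transfer(tree):
    """Linearize the target tree into a string (generation stand-in)."""
    label, _, children = tree
    if not children:
        return label
    return " ".join([post_transfer(c) for c in children] + [label])

source = ("Haus", {"cat": "N"}, [("klein", {"cat": "ADJ"}, [])])
print(post_transfer(transfer(pre_transfer(source))))  # small house
```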
260

JOKE RECOMMENDER SYSTEM USING HUMOR THEORY

Soumya Agrawal (9183053) 29 July 2020 (has links)
Every individual's sense of humor is different and varies greatly from one person to another, which makes learning any individual's humor preferences a challenge. Humor is much more than just a source of entertainment; it is an essential tool that aids communication. Understanding humor preferences can lead to improved social interactions and bridge existing social or economic gaps.

In this study, we propose a methodology that aims to develop a recommendation system for jokes by analyzing their text. Various researchers have proposed different theories of humor depending on their area of focus. This exploratory study focuses mainly on Attardo and Raskin's (1991) General Theory of Verbal Humor and implements the knowledge resources defined by it to annotate the jokes. These annotations capture the characteristics of the jokes and play an important role in determining how alike the jokes are. We use Lin's similarity metric (Lin, 1998) to capture this similarity computationally. The jokes are clustered hierarchically based on their similarity values, and the clusters are used for recommendation. We also compare our joke recommendations to those obtained by the Eigentaste algorithm (Goldberg, Roeder, Gupta, & Perkins, 2001), an existing joke recommendation system that does not consider the content of the joke in its recommendations.
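To illustrate the pipeline, the following Python sketch computes a Lin-style similarity over toy GTVH-like annotations and clusters the jokes hierarchically with SciPy; the resource inventory, values, and information-content weighting are assumptions for illustration, not the study's actual annotation scheme.

```python
from math import log
from collections import Counter
from scipy.cluster.hierarchy import linkage, fcluster

# Toy GTVH-style annotations (knowledge resources per joke); the values
# and resource inventory here are invented for illustration.
jokes = {
    "j1": {"script_opposition": "smart/dumb", "target": "lawyers"},
    "j2": {"script_opposition": "smart/dumb", "target": "doctors"},
    "j3": {"script_opposition": "life/death", "target": "lawyers"},
}

# Information content of each (resource, value) feature from its frequency.
freq = Counter(f for ann in jokes.values() for f in ann.items())
total = sum(freq.values())
ic = {f: -log(c / total) for f, c in freq.items()}

def lin_sim(a, b):
    """Lin (1998): twice the information in the commonality, divided by
    the sum of the information in each description."""
    fa, fb = set(jokes[a].items()), set(jokes[b].items())
    common = 2 * sum(ic[f] for f in fa & fb)
    return common / (sum(ic[f] for f in fa) + sum(ic[f] for f in fb))

# Condensed pairwise distance matrix, then average-linkage clustering.
names = list(jokes)
dist = [1 - lin_sim(a, b) for i, a in enumerate(names) for b in names[i + 1:]]
clusters = fcluster(linkage(dist, method="average"), t=2, criterion="maxclust")
print(dict(zip(names, clusters)))
```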
