1

What Machines Understand about Personality Words after Reading the News

Moyer, Eric David 15 December 2014 (has links)
No description available.
2

Using the skip-gram model for distributed word representation in the Media Cloud Brasil project

Lopes, Evandro Dalbem 30 June 2015 (has links)
There is a representation problem in natural language processing: because the traditional bag-of-words model represents documents and words as a single matrix, that matrix tends to be extremely sparse. To deal with this problem, several methods have emerged that can represent words with a distributed representation in a smaller, more compact space, including properties that allow words to be related semantically. The aim of this work is to take a dataset obtained from the Media Cloud Brasil project and apply the skip-gram model to explore relations and search for patterns that help in understanding the content.
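The skip-gram model applied in this thesis predicts the words surrounding each word in a corpus. A minimal sketch of the first step, turning raw text into (center, context) training pairs, is shown below; the function name, window size, and example sentence are illustrative assumptions, not code from the thesis.

```python
def skipgram_pairs(tokens, window=2):
    """Yield (center, context) pairs within a symmetric window
    around each token, as the skip-gram model does before training."""
    pairs = []
    for i, center in enumerate(tokens):
        lo = max(0, i - window)
        hi = min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:  # a word is not its own context
                pairs.append((center, tokens[j]))
    return pairs

# Toy sentence standing in for a Media Cloud Brasil news snippet.
sentence = "media cloud brasil collects news articles".split()
print(skipgram_pairs(sentence, window=1))
```

Each pair becomes one training example: the model adjusts the center word's vector so that it predicts its context words, which is what ultimately places semantically related words near each other.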
3

An Exploration of the Word2vec Algorithm: Creating a Vector Representation of a Language Vocabulary that Encodes Meaning and Usage Patterns in the Vector Space Structure

Le, Thu Anh 05 1900 (has links)
This thesis is an exploration and exposition of a highly efficient shallow neural network algorithm called word2vec, developed by T. Mikolov et al. to create vector representations of a language vocabulary such that information about the meaning and usage of the vocabulary words is encoded in the vector space structure. Chapter 1 introduces natural language processing, vector representations of language vocabularies, and the word2vec algorithm. Chapter 2 reviews the basic mathematical theory of deterministic convex optimization. Chapter 3 provides background on concepts from computer science used in the word2vec algorithm: Huffman trees, neural networks, and binary cross-entropy. Chapter 4 discusses the word2vec algorithm itself in detail, including continuous bag-of-words, skip-gram, hierarchical softmax, and negative sampling. Finally, Chapter 5 explores some applications of vector representations: word categorization, analogy completion, and language translation assistance.
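The analogy-completion application described in Chapter 5 rests on vector arithmetic: given trained word vectors, the query a : b :: c : ? is answered by finding the word closest to v(b) - v(a) + v(c) under cosine similarity. A sketch under stated assumptions follows; the tiny two-dimensional vectors are hand-made for illustration, not vectors trained by the thesis.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def analogy(vecs, a, b, c):
    """Solve a : b :: c : ? by returning the word whose vector is
    closest to v(b) - v(a) + v(c), excluding the query words."""
    dim = len(vecs[a])
    target = [vecs[b][i] - vecs[a][i] + vecs[c][i] for i in range(dim)]
    return max((w for w in vecs if w not in (a, b, c)),
               key=lambda w: cosine(vecs[w], target))

# Hand-made toy vectors (assumption): one axis loosely tracks
# "royalty", the other loosely tracks "male".
toy = {
    "king":  [0.9, 0.8],
    "queen": [0.9, 0.2],
    "man":   [0.1, 0.8],
    "woman": [0.1, 0.2],
    "apple": [0.5, 0.9],
}
print(analogy(toy, "man", "king", "woman"))  # → queen
```

With real word2vec vectors the same arithmetic recovers the well-known king - man + woman ≈ queen result; here the toy geometry is constructed so the same relation holds.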
