141

[en] MACHINE LEARNING FOR SENTIMENT CLASSIFICATION / [pt] APRENDIZADO DE MÁQUINA PARA O PROBLEMA DE SENTIMENT CLASSIFICATION

PEDRO OGURI 18 May 2007
Sentiment Analysis is a text categorization problem in which we want to identify favorable and unfavorable opinions towards a given topic. Examples of such topics are organizations and their products. In this problem, documents are classified according to their sentiment, connotation, attitudes, and opinions instead of being limited to the facts they describe. The main challenge in Sentiment Classification is identifying how sentiments are expressed in texts and whether they indicate a positive (favorable) or negative (unfavorable) opinion towards a topic. Due to the growing volume of information available online, in an environment where we all tend to be content generators and to express opinions on a variety of subjects, Machine Learning techniques have become more and more attractive. In this dissertation, we investigate Machine Learning methods applied to Sentiment Analysis. We present document representation models such as bag-of-words and N-grams, and we compare the performance of the Naive Bayes and Support Vector Machine (SVM) classifiers under each representation model.
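The comparison described above can be sketched with standard tooling. The following is a hypothetical illustration, not the dissertation's code: scikit-learn pipelines pairing two representation models (bag-of-words and word uni+bigrams) with the two classifiers compared in the thesis; the corpus and labels are toy placeholders.

```python
# Hedged sketch: comparing Naive Bayes and a linear SVM under
# bag-of-words vs. N-gram document representations (scikit-learn).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Placeholder corpus; a real experiment would use a labeled review dataset.
docs = ["great product, works perfectly", "terrible support, waste of money",
        "love it, highly recommended", "broke after two days, very poor"] * 25
labels = [1, 0, 1, 0] * 25  # 1 = favorable, 0 = unfavorable

representations = {
    "bag-of-words": CountVectorizer(ngram_range=(1, 1)),
    "uni+bigrams":  CountVectorizer(ngram_range=(1, 2)),
}
classifiers = {"NaiveBayes": MultinomialNB(), "LinearSVM": LinearSVC()}

for rep_name, vectorizer in representations.items():
    for clf_name, clf in classifiers.items():
        pipe = make_pipeline(vectorizer, clf)
        scores = cross_val_score(pipe, docs, labels, cv=5)
        print(f"{rep_name:12s} + {clf_name:10s}: {scores.mean():.3f}")
```

Only the `CountVectorizer` parameters distinguish the representation models, which keeps the classifier comparison controlled.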
142

Explorer et apprendre à partir de collections de textes multilingues à l'aide des modèles probabilistes latents et des réseaux profonds / Mining and learning from multilingual text collections using topic models and word embeddings

Balikas, Georgios 20 October 2017
Text is one of the most pervasive and persistent sources of information. Content analysis of text in its broad sense refers to methods for studying and retrieving information from documents. Nowadays, with the ever increasing amounts of text becoming available online in several languages and different styles, content analysis of text is of tremendous importance as it enables a variety of applications. To this end, unsupervised representation learning methods such as topic models and word embeddings constitute prominent tools. The goal of this dissertation is to study and address challenging problems in this area, focusing both on the design of novel text mining algorithms and tools, and on studying how these tools can be applied to text collections written in a single language or in several. In the first part of the thesis we focus on topic models and more precisely on how to incorporate prior information about text structure into such models. Topic models are built on the premise of bag-of-words, and therefore words are exchangeable. While this assumption benefits the calculation of conditional probabilities, it results in a loss of information. To overcome this limitation we propose two mechanisms that extend topic models by integrating knowledge of text structure into them. We assume that the documents are partitioned into thematically coherent text segments. The first mechanism assigns the same topic to the words of a segment. The second capitalizes on the properties of copulas, a tool mainly used in the fields of economics and risk management that serves to model the joint probability density distributions of random variables while accessing only their marginals. The second part of the thesis explores bilingual topic models for comparable corpora with explicit document alignments. Typically, a document collection for such models is in the form of comparable document pairs. The documents of a pair are written in different languages and are thematically similar. Unless they are translations, the documents of a pair are similar only to some extent. Meanwhile, representative topic models assume that the documents have identical topic distributions, which is a strong and limiting assumption. To overcome it, we propose novel bilingual topic models that incorporate the notion of cross-lingual similarity of the documents that constitute the pairs into their generative and inference processes. Calculating this cross-lingual document similarity is a task in itself, which we propose to address using cross-lingual word embeddings. The last part of the thesis concerns the use of word embeddings and neural networks for three text mining applications. First, we discuss polylingual document classification, where we argue that translations of a document can be used to enrich its representation. Using an auto-encoder to obtain these robust document representations, we demonstrate improvements in the task of multi-class document classification. Second, we explore multi-task sentiment classification of tweets, arguing that jointly training classification systems on correlated tasks can improve the obtained performance. To this end, we show how to achieve state-of-the-art performance on a sentiment classification task using recurrent neural networks. The third application we explore is cross-lingual information retrieval. Given a document written in one language, the task consists in retrieving the most similar documents from a pool of documents written in another language. In this line of research, we show that by adapting the transportation problem to the task of estimating document distances, one can achieve important improvements.
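The closing idea, document distance as a transportation problem, can be made concrete with a small optimal-transport sketch in the spirit of Word Mover's Distance. This is an illustrative reconstruction, not the thesis implementation: documents are normalized bags of word vectors, and the transport linear program is solved with scipy; the random vectors stand in for real (cross-lingual) word embeddings.

```python
# Hedged sketch: document distance as an optimal transport problem,
# in the spirit of Word Mover's Distance. Real use would plug in
# (cross-lingual) word embeddings; random vectors stand in here.
import numpy as np
from scipy.optimize import linprog

def transport_distance(X, a, Y, b):
    """Minimal cost to move mass a (on word vectors X) to mass b (on Y)."""
    n, m = len(a), len(b)
    # Cost matrix: Euclidean distance between every pair of word vectors.
    M = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)
    # LP variables: the n*m entries of the transport plan T, flattened
    # row-major. Constraints: sum_j T[i, j] = a[i]; sum_i T[i, j] = b[j].
    A_eq, b_eq = [], []
    for i in range(n):
        row = np.zeros(n * m); row[i * m:(i + 1) * m] = 1
        A_eq.append(row); b_eq.append(a[i])
    for j in range(m):
        col = np.zeros(n * m); col[j::m] = 1
        A_eq.append(col); b_eq.append(b[j])
    res = linprog(M.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=(0, None))
    return res.fun

rng = np.random.default_rng(0)
X, Y = rng.normal(size=(4, 50)), rng.normal(size=(5, 50))  # word vectors
a, b = np.full(4, 1 / 4), np.full(5, 1 / 5)                # nBOW weights
print(transport_distance(X, a, Y, b))
```

Dedicated solvers (e.g., the POT library) scale better than a generic LP, but the formulation is the same.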
143

[en] STOCK MARKET BEHAVIOR PREDICTION USING FINANCIAL NEWS IN PORTUGUESE / [pt] PREDIÇÃO DO COMPORTAMENTO DO MERCADO FINANCEIRO UTILIZANDO NOTÍCIAS EM PORTUGUÊS

HERALDO PIMENTA BORGES FILHO 27 August 2015
A set of financial theories, such as the efficient market hypothesis and the random walk theory, says it is impossible to predict the future of the stock market based on currently available information. However, recent research has proven otherwise by finding a relationship between the content of a news item and the current behavior of a stock. Our goal is to design and implement a prediction algorithm that uses news articles about publicly traded companies to predict stock behavior on the stock exchange. We use a machine learning approach for the task of predicting whether a stock will go up, go down, or stay neutral, using quantitative and qualitative information, such as financial market news. We evaluate our system on a dataset of six thousand news articles, and our experiments show an accuracy of 68.57 percent for the task.
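A minimal sketch of this kind of setup follows, assuming, hypothetically, a 0.5 percent neutral band around the next-day return and a TF-IDF plus linear SVM text pipeline; neither is taken from the dissertation itself.

```python
# Hedged sketch: labeling stock behavior as up / down / neutral from
# next-day returns and training a text classifier on the paired news.
# The 0.5% neutral band and the toy data are assumptions, not the
# thesis's actual parameters.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

def label_return(r, band=0.005):
    """Map a next-day return to a class: 1 = up, -1 = down, 0 = neutral."""
    if r > band:
        return 1
    if r < -band:
        return -1
    return 0

news = ["company beats earnings expectations",
        "regulator fines company over disclosure failures",
        "quarterly results in line with forecasts"] * 30
returns = [0.021, -0.034, 0.001] * 30
y = [label_return(r) for r in returns]

model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(news, y)
print(model.predict(["company announces record profits"]))
```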
144

A Semantic Triplet Based Story Classifier

January 2013
abstract: Text classification, in the artificial intelligence domain, is an activity in which text documents are automatically classified into predefined categories using machine learning techniques. An example of this is classifying uncategorized news articles into predefined categories such as "Business", "Politics", "Education", "Technology", etc. In this thesis, a supervised machine learning approach is followed, in which a model is first trained with pre-classified training data and the class of test data is then predicted. Good feature extraction is an important step in the machine learning approach, and hence the main component of this text classifier is semantic triplet-based features, in addition to traditional features like standard keyword-based features and statistical features based on shallow parsing (such as the density of POS tags and named entities). A triplet {Subject, Verb, Object} in a sentence is defined as a relation between subject and object, the relation being the predicate (verb). Triplet extraction is a five-step process that takes as input a corpus of web text documents, each consisting of one or many paragraphs, ranging from RSS feeds to extremist websites. The input corpus feeds into the "Pronoun Resolution" step, which uses a heuristic approach to identify the noun phrases referenced by the pronouns. The next step, "SRL Parser", is a shallow semantic parser that converts the pronoun-resolved paragraphs into an annotated predicate-argument format. The output of the SRL parser is processed by the "Triplet Extractor" algorithm, which forms triplets of the form {Subject, Verb, Object}. Generalization and reduction of triplet features is the next step. The reduced feature representation cuts computing time, yields better discriminatory behavior, and mitigates the curse of dimensionality. For training and testing, a ten-fold cross-validation approach is followed: in each round an SVM classifier is trained with 90% of the labeled (training) data, and in the testing phase the classes of the remaining 10% of unlabeled (testing) data are predicted. In conclusion, this thesis proposes a model with semantic triplet-based features for story classification. The effectiveness of the model is demonstrated against other traditional features used in the literature for text classification tasks. / Dissertation/Thesis / M.S. Computer Science 2013
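The triplet notion can be approximated with an ordinary dependency parser. The sketch below uses spaCy's `nsubj`/`dobj` relations as a simplified stand-in for the thesis's pronoun-resolution and SRL stages, so it illustrates the {Subject, Verb, Object} idea rather than the actual five-step pipeline.

```python
# Hedged sketch: extracting {Subject, Verb, Object} triplets from a
# dependency parse. The thesis pipeline uses pronoun resolution and a
# shallow semantic (SRL) parser; spaCy's dependency labels are used
# here only as a simplified stand-in.
import spacy

nlp = spacy.load("en_core_web_sm")

def extract_triplets(text):
    triplets = []
    for sent in nlp(text).sents:
        for token in sent:
            if token.pos_ == "VERB":
                subjects = [c for c in token.children
                            if c.dep_ in ("nsubj", "nsubjpass")]
                objects = [c for c in token.children
                           if c.dep_ in ("dobj", "obj")]
                for s in subjects:
                    for o in objects:
                        triplets.append((s.text, token.lemma_, o.text))
    return triplets

print(extract_triplets("The group released a statement. "
                       "Officials condemned the attack."))
# e.g. [('group', 'release', 'statement'), ('Officials', 'condemn', 'attack')]
```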
145

"Classificação de páginas na internet" / "Internet pages classification"

José Martins Júnior 11 April 2003
The huge growth of the Internet began in the 1990s with the arrival of commercial Internet service providers, and results mainly from the good acceptance and wide dissemination of the Web. The main problem affecting its scalability and use is the organization and classification of its content. Current search engines locate pages on the Web through lexical comparison between sets of words and the contents of hypertexts. Such mechanisms are inefficient when one needs to find contents that express concepts or objects, such as products for sale on electronic commerce sites. The Semantic Web was announced in 2000 for this purpose, aiming to establish new standards for the formal representation of content in Web pages. With its implementation, initially planned to take ten years, it will be possible to express concepts in hypertext contents, representing objects classified by an ontology, thus enabling the use of knowledge-based systems implemented by intelligent software agents. The DEEPSIA project was conceived as a buyer-centered solution, in contrast to current marketplaces, to solve the problem of locating Web pages that describe products for sale. It uses text classification methods, supported by the k-NN and C4.5 algorithms, in the decision process carried out by one of the agents in its architecture, the Crawler Agent. Tests of the system on Brazilian sites showed the need to adapt it in several respects, including the decision process involved, which is the focus of the present work. The solution to the problem involved the application and evaluation of the Support Vector Machines method, and it is described in detail.
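A hedged sketch of the classifier comparison implied by the work follows: scikit-learn's `DecisionTreeClassifier` stands in for C4.5, and toy pages replace the crawled e-commerce content.

```python
# Hedged sketch: comparing the classifier families the project cites for
# deciding whether a page describes products for sale. scikit-learn's
# DecisionTreeClassifier stands in for C4.5; the pages are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

pages = ["buy this laptop now, free shipping, add to cart",
         "history of the roman empire, chapter one",
         "special offer: smartphone 20% off, order today",
         "museum opening hours and visitor information"] * 25
is_product_page = [1, 0, 1, 0] * 25

for name, clf in [("k-NN", KNeighborsClassifier(n_neighbors=5)),
                  ("tree (C4.5-like)", DecisionTreeClassifier()),
                  ("SVM", SVC(kernel="linear"))]:
    pipe = make_pipeline(TfidfVectorizer(), clf)
    print(name, cross_val_score(pipe, pages, is_product_page, cv=5).mean())
```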
146

An Approach Towards Self-Supervised Classification Using Cyc

Coursey, Kino High 12 1900
Due to the long duration required for manual knowledge entry by human knowledge engineers, it is desirable to find methods to automatically acquire knowledge about the world by accessing online information. In this work I examine using the Cyc ontology to guide the creation of Naïve Bayes classifiers to provide knowledge about items described in Wikipedia articles. Given an initial set of Wikipedia articles, the system uses the ontology to create positive and negative training sets for the classifiers in each category. The order in which classifiers are generated and used to test articles is also guided by the ontology. The research conducted shows that a system can be created that utilizes statistical text classification methods to extract information from an ad-hoc generated information source like Wikipedia for use in a formal semantic ontology like Cyc. Benefits and limitations of the system are discussed along with future work.
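The general pattern, letting an ontology's category structure assemble positive and negative training sets for per-category Naive Bayes classifiers, can be sketched as follows. The miniature "ontology" and articles are invented stand-ins; the actual system uses Cyc and Wikipedia.

```python
# Hedged sketch of the general pattern: an ontology's category structure
# selects positive and negative training sets for one Naive Bayes
# classifier per category. The tiny "ontology" and articles below are
# invented stand-ins; the actual work uses Cyc and Wikipedia.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# category -> seed articles assumed to belong to it
ontology = {
    "Animal": ["the lion is a large cat of the genus panthera",
               "dolphins are aquatic mammals known for intelligence"],
    "City":   ["paris is the capital and most populous city of france",
               "tokyo is the political and economic center of japan"],
}

classifiers = {}
for category, positives in ontology.items():
    # Negatives: seed articles from every *other* (disjoint) category.
    negatives = [doc for c, docs in ontology.items() if c != category
                 for doc in docs]
    clf = make_pipeline(CountVectorizer(), MultinomialNB())
    clf.fit(positives + negatives,
            [1] * len(positives) + [0] * len(negatives))
    classifiers[category] = clf

article = "berlin is the capital and largest city of germany"
for category, clf in classifiers.items():
    print(category, clf.predict_proba([article])[0][1])
```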
147

Context matters : Classifying Swedish texts using BERT's deep bidirectional word embeddings

Holmer, Daniel January 2020
When classifying texts using a linear classifier, the texts are commonly represented as feature vectors. Previous methods of representing features as vectors have been unable to capture the context of individual words in the texts, in theory leading to a poor representation of natural language. Bidirectional Encoder Representations from Transformers (BERT) uses a multi-headed self-attention mechanism to create deep bidirectional feature representations capable of modeling the whole context of all words in a sequence. A BERT model uses a transfer learning approach, in which it is pre-trained on a large amount of data and can be further fine-tuned for several downstream tasks. This thesis uses one multilingual and two dedicated Swedish BERT models for the task of classifying Swedish texts as being of either easy-to-read or standard complexity in their respective domains. The performance on the text classification task using the different models is then compared both with the feature representation methods used in earlier studies and across the BERT models themselves. The results show that all models performed better on the classification task than the previous methods of feature representation. Furthermore, the dedicated Swedish models showed better performance than the multilingual model, with the Swedish model pre-trained on more diverse data outperforming the other.
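Fine-tuning such a model takes only a few lines with Hugging Face Transformers. The sketch below is illustrative, not the thesis code: `KB/bert-base-swedish-cased` is one publicly available dedicated Swedish BERT, and the texts, labels, and hyperparameters are placeholders.

```python
# Hedged sketch: fine-tuning a Swedish BERT model for binary
# classification (easy-to-read vs. standard complexity) with Hugging
# Face Transformers. Texts, labels, and hyperparameters are placeholders.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "KB/bert-base-swedish-cased"  # a dedicated Swedish BERT
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=2)

texts = ["Det här är en enkel mening.",
         "Styrelsens sammansättning regleras i bolagsordningen."]
labels = torch.tensor([0, 1])  # 0 = easy-to-read, 1 = standard complexity

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):  # a few optimization steps on the toy batch
    out = model(**batch, labels=labels)
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()

model.eval()
with torch.no_grad():
    pred = model(**batch).logits.argmax(dim=-1)
print(pred)
```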
148

[pt] CLASSIFICAÇÃO DE FALHAS DE EQUIPAMENTOS DE UNIDADE DE INTERVENÇÃO EM CONSTRUÇÃO DE POÇOS MARÍTIMOS POR MEIO DE MINERAÇÃO TEXTUAL / [en] TEXT CLASSIFICATION OF OFFSHORE RIG EQUIPMENT FAILURE

07 April 2020
Offshore well construction has proven to be a complex and risky activity. In order to build offshore wells, operators rely mainly on offshore rigs. These rigs carry expensive day rates, related to their rental and preventive maintenance, but also driven by the equipment failures to which they are subject. In Petrobras's offshore scenario, a more detailed classification of rig equipment failures was implemented in the company database in June 2011. That brought a discontinuity to the database records and created a demand to map the older, less detailed records onto the new, more complete classification structure. Rig equipment failure records consist essentially of textual information. With a backlog of 3,384 records, it would be infeasible to assign one person to classify them manually. Hence the need for a tool that could classify these records as automatically as possible, using the records already labeled under the new scheme as a training base. The main purpose of this work is to resolve this discontinuity in the rig equipment failure records. The data were treated and transformed with text mining tools and then processed by the supervised learning algorithm SVM (Support Vector Machines). After the best model configuration was obtained, the model was applied to the textual information of the backlog, assigning classes according to the new classification structure.
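The workflow amounts to training on post-2011 records labeled under the new scheme and applying the model to the legacy backlog. Below is a hypothetical sketch with TF-IDF features and a linear SVM, using invented records and class names.

```python
# Hedged sketch of the workflow: train an SVM on failure records already
# labeled under the new (post-2011) scheme, then classify the legacy
# backlog. Records and class names are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

labeled_texts = ["top drive hydraulic leak during tripping",
                 "BOP control pod communication failure",
                 "mud pump pressure fluctuation, valve seat worn"] * 20
labels = ["top_drive", "bop", "mud_pump"] * 20

model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(labeled_texts, labels)

# Legacy records (pre-2011, coarse classification) to be re-labeled:
backlog = ["hydraulic oil leak observed at top drive swivel",
           "loss of signal from subsea control pod"]
print(model.predict(backlog))
```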
149

Sledovač aktuálního dění / Actual Events Tracker

Odstrčilík, Martin January 2013
The goal of this master's thesis project was to develop an application for tracking current events in the users' surrounding area. The application should allow users to view events, create new events, and add comments to existing ones. Beyond the implementation of the application, the project includes an analysis of the problem, comprising a comparison with existing solutions and a survey of available technologies and frameworks applicable to the implementation. Another part of this work describes the theory behind the data classification used internally for event and comment analysis. The work also covers the design of the application, including the user interface, software architecture, database, communication protocol, and data classifiers. The main part of the project, the implementation, is described afterwards. The thesis concludes with a summary of the whole process and some ideas for enhancing the application in the future.
150

Deep Learning för klassificering av kundsupport-ärenden / Deep Learning for classification of customer support errands

Jonsson, Max January 2020
Companies and organizations providing customer support via email will over time accumulate large amounts of textual data. Thanks to continuous advances in Machine Learning, the possibilities to use this data to make an organization's future support handling more efficient are steadily increasing. The aim of this study is to analyze and evaluate how Deep Learning can be used to automate the process of classifying support errands. The study is based on a Swedish company's domain, where classification is made within the company's predefined categories. To build a dataset, support errands received via email (subject and body pairs) were extracted from the company's support database, each errand belonging to one of nine distinct categories. The evaluation was done by analyzing the differences in measured classification accuracy when different data cleaning methods were used and when the neural networks were built with different architectures. The scope was limited to examining different types of Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN), the latter in the form of both unidirectional and bidirectional Long Short-Term Memory (LSTM) cells. The results of this study show no increase in accuracy for any of the examined data cleaning methods. However, the results do show that restricting the vocabulary has no negative effect either. A vocabulary restriction may still be useful to reduce other costs, such as the time required to train a network, and possibly to help prevent overfitting. Of the examined network architectures, CNNs outperformed RNNs on the dataset used. The best-performing architecture was a network with one convolution per pipeline, which reached classification accuracies of 79.3 and 75.4 percent on two different test sets. The results also show that some categories are harder for the network to classify than others, because they are not sufficiently distinct from the remaining categories in the dataset.
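The strongest architecture described, a text CNN with a single convolution, can be sketched in Keras. Vocabulary size, sequence length, and filter settings below are assumptions, not the study's exact hyperparameters.

```python
# Hedged sketch of the architecture family the study found strongest:
# a text CNN with a single convolution, built with Keras. Settings are
# assumptions, not the study's exact hyperparameters.
import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE, MAX_LEN, NUM_CLASSES = 20000, 200, 9  # nine support categories

model = tf.keras.Sequential([
    layers.Input(shape=(MAX_LEN,)),          # tokenized, padded errands
    layers.Embedding(VOCAB_SIZE, 128),
    layers.Conv1D(filters=128, kernel_size=5, activation="relu"),
    layers.GlobalMaxPooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# Training would use tokenized (subject, body) pairs padded to MAX_LEN:
# model.fit(x_train, y_train, validation_split=0.1, epochs=5)
```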
