About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Automatic language identification of short texts

Avenberg, Anna January 2020
The world is growing more connected through the use of online communication, exposing software and humans to all the world's languages. While devices are able to understand and share raw data between themselves and with humans, the information itself is not expressed in a single, uniform format. This causes issues in both human-to-computer interaction and human-to-human communication. Automatic language identification (LID) is a field within artificial intelligence and natural language processing that strives to solve part of these issues by identifying languages from text, sign language and speech. One of the challenges is to identify the short pieces of text that can be found online, such as messages, comments and posts on social media, because of the small amount of information they carry. The goal of this thesis has been to build a machine learning model that can identify the language of these short pieces of text. A long short-term memory (LSTM) model was built and benchmarked against Facebook's fastText model. The results show that the LSTM model reached an accuracy of around 95%, while the fastText model used for comparison reached an accuracy of 97%. The LSTM model struggled more with texts shorter than 50 characters than with longer texts. Its classification performance was also relatively poor in cases where languages are similar, such as Croatian and Serbian. Both the LSTM model and the fastText model reached accuracies above 94%, which can be considered high, depending on how it is evaluated. There are, however, many improvements and directions for future work to consider: looking further into texts shorter than 50 characters, evaluating the model's softmax output vector values, and handling similar languages.
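As an illustrative sketch only, the snippet below shows one way a character-level LSTM classifier with a softmax output could be set up in Keras; the toy texts, labels, and hyperparameters are hypothetical and do not reproduce the thesis's model, data, or training setup.

```python
# Minimal character-level LSTM language-identification sketch
# (hypothetical toy data; not the thesis's actual model).
import numpy as np
from tensorflow.keras import layers, models
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

texts = ["hello there", "hej hej", "bonjour tout le monde"]  # toy short texts
labels = ["en", "sv", "fr"]

# Map characters to integer ids and pad to a fixed length.
tok = Tokenizer(char_level=True)
tok.fit_on_texts(texts)
X = pad_sequences(tok.texts_to_sequences(texts), maxlen=100)

lang_ids = {lang: i for i, lang in enumerate(sorted(set(labels)))}
y = np.array([lang_ids[lang] for lang in labels])

model = models.Sequential([
    layers.Embedding(input_dim=len(tok.word_index) + 1, output_dim=32),
    layers.LSTM(64),
    layers.Dense(len(lang_ids), activation="softmax"),  # per-language probabilities
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=3, verbose=0)

# The softmax output vector can be inspected to gauge confidence on very short inputs.
probs = model.predict(pad_sequences(tok.texts_to_sequences(["hola"]), maxlen=100))
print(probs)
```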
2

Amélioration du système de recueils d'information de l'entreprise Semantic Group Company grâce à la constitution de ressources sémantiques / Improvement of the information system of the Semantic Group Company through the creation of semantic resources

Yahaya Alassan, Mahaman Sanoussi 05 October 2017
Taking the semantic aspect of textual data into account during the classification task has become a real challenge over the last ten years. This difficulty is compounded by the fact that most of the data available on social networks are short texts, which in particular makes methods based on the "bag of words" representation rather inefficient. The approach proposed in this research project differs from the approaches proposed in previous work on the enrichment of short messages, for three reasons. First, we do not use external knowledge bases such as Wikipedia, because the short messages processed by the company generally come from specific domains. Second, the data to be processed are not used to build the resources, because of the way the tool operates. Third, to our knowledge there is no work that, on the one hand, exploits structured data such as the company's data to build semantic resources and, on the other hand, measures the impact of enrichment on an interactive system for clustering text streams. In this thesis, we propose the creation of resources for enriching short messages in order to improve the performance of the semantic clustering tool of the company Succeed Together. This tool implements supervised and unsupervised classification methods. To build these resources, we use sequential data mining techniques.
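By way of illustration, the sketch below mimics the general idea of mining term associations from structured records and using them to enrich short messages before clustering; the records, the support threshold, and the enrich helper are hypothetical stand-ins, not the company's data, resources, or tool.

```python
# Toy sketch: build a small "semantic resource" from term sequences and use it
# to enrich short messages (hypothetical data and names).
from collections import Counter
from itertools import combinations

# Toy structured records standing in for domain-specific company data.
records = [
    ["billing", "invoice", "payment"],
    ["invoice", "payment", "refund"],
    ["login", "password", "reset"],
    ["password", "reset", "account"],
]

# Count ordered term pairs co-occurring within a record
# (a crude stand-in for frequent sequential pattern mining).
pair_counts = Counter()
for seq in records:
    for a, b in combinations(seq, 2):
        pair_counts[(a, b)] += 1

min_support = 2
resource = {}
for (a, b), count in pair_counts.items():
    if count >= min_support:
        resource.setdefault(a, set()).add(b)

def enrich(message, resource):
    """Append associated terms from the mined resource to a short message."""
    tokens = message.lower().split()
    extra = {t for tok in tokens for t in resource.get(tok, set())}
    return tokens + sorted(extra)

print(enrich("invoice problem", resource))  # ['invoice', 'problem', 'payment']
```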
3

On Clustering and Evaluation of Narrow Domain Short-Text Corpora

Pinto Avendaño, David Eduardo 23 July 2008
This doctoral thesis investigates the problem of clustering special collections of documents called narrow-domain short texts. To carry out this task, several corpora and clustering methods have been analyzed. Moreover, a number of corpus evaluation measures, term selection techniques and cluster validity measures have been introduced in order to study the following problems: determining the relative difficulty of clustering a corpus and studying some of its characteristics, such as text length, domain broadness, stylometry, class imbalance and structure; and contributing to the state of the art on the clustering of corpora composed of narrow-domain short texts. The research carried out is partially focused on "short-text clustering". This topic is considered relevant given the current and future way in which people tend to use a "reduced language" made up of short texts (for example, blogs, snippets, news, and text messaging such as email and chat). Additionally, the domain broadness of corpora is studied. In this sense, a corpus can be considered narrow or broad if its degree of vocabulary overlap is high or low, respectively. In the categorization task, it is quite complex to deal with narrow-domain corpora such as scientific articles, technical reports, patents, etc. The main objective of this work is to study possible strategies for dealing with the following two problems: a) the low frequencies of vocabulary terms in short texts, and b) the high vocabulary overlap associated with narrow domains. Although each of these problems is challenging enough on its own, when dealing with narrow-domain short texts the complexity of the problem increases. / Pinto Avendaño, DE. (2008). On Clustering and Evaluation of Narrow Domain Short-Text Corpora [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/2641 / Palancia
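Purely as an illustration, the scikit-learn sketch below shows the generic pipeline such work builds on: vectorize a toy short-text corpus, cluster it, and score cluster validity with the silhouette coefficient; the data and parameter choices are hypothetical and do not reproduce the thesis's corpora, term selection techniques, or evaluation measures.

```python
# Minimal short-text clustering sketch with a cluster validity check
# (hypothetical toy corpus; not the thesis's method).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

docs = [
    "clustering of short text corpora",
    "cluster validity measures for text",
    "term selection for narrow domains",
    "feature selection in restricted domains",
]

# Document-frequency thresholds give a crude term-selection knob.
X = TfidfVectorizer(min_df=1, max_df=0.9).fit_transform(docs)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("labels:", km.labels_)
print("silhouette:", silhouette_score(X, km.labels_))
```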
4

Recommendation of Text Properties for Short Texts with the Use of Machine Learning : A Comparative Study of State-of-the-Art Techniques Including BERT and GPT-2 / Rekommendation av textegenskaper för korta texter med hjälp av maskininlärning : En jämförande studie av de senaste teknikerna inklusive BERT och GPT-2

Zapata, Luciano January 2023
Text mining has gained considerable attention due to the extensive usage of electronic documents. The significant increase in electronic document usage has created a necessity to process and analyze them effectively. Rule-based systems have traditionally been used to evaluate short pieces of text, but they have limitations, including the need for significant manual effort to create and maintain rules and a high risk of complex bugs. As a result, text classification has emerged as a promising solution for extracting meaning from short texts, which are defined as texts limited by a specific character count or word count. This study investigates the feasibility and effectiveness of text classification in classifying short pieces of text according to their appropriate text properties, based on users' intentions in the text. The study focuses on comparing two transformer models, GPT-2 and BERT, in their ability to classify short texts. While other studies have compared these models in intention classification of text, this study is unique in its examination of their performance on short pieces of text in this specific context. This study uses user-labelled data to fine-tune the models, which are then tested on a test dataset from the same source. The comparative analysis of the models indicates that BERT generally outperforms GPT-2 in classifying users' intentions based on the appropriate text properties, with an F1-score of 0.68 compared to GPT-2's F1-score of 0.51. However, GPT-2 performed better on certain closely related classes, suggesting that both models capture interesting features of these classes. Furthermore, the results demonstrated that some classes were accurately classified despite being context-dependent and positioned within longer sentences, indicating that the models likely capture features of these classes and facilitate their classification. Both models show promising potential as classification models for short texts based on users' intentions and their associated text properties. However, further research may be necessary to improve their accuracy. Suggestions for enhancing their performance include utilizing more recent versions of GPT, such as GPT-3 or GPT-4, optimizing hyperparameters, adjusting preprocessing methods, and adopting alternative approaches to handle data imbalance. Additionally, testing the models on datasets from diverse domains with more intricate contexts could provide greater insight into their limitations.
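As a hedged sketch of the general fine-tuning setup (not the study's actual dataset, labels, or hyperparameters), the snippet below fine-tunes a BERT sequence classifier on toy short texts with Hugging Face Transformers; GPT-2 could be substituted via GPT2ForSequenceClassification with a padding token set.

```python
# Toy fine-tuning sketch with hypothetical short texts and label ids.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

texts = ["book a meeting for tomorrow", "what is your refund policy"]
labels = torch.tensor([0, 1])  # hypothetical text-property / intention ids

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

enc = tok(texts, padding=True, truncation=True, max_length=64,
          return_tensors="pt")
optim = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):                     # a few toy gradient steps
    out = model(**enc, labels=labels)  # cross-entropy loss over the classes
    out.loss.backward()
    optim.step()
    optim.zero_grad()

model.eval()
with torch.no_grad():
    preds = model(**enc).logits.argmax(dim=-1)

# F1 on a held-out test set can then be computed with sklearn.metrics.f1_score.
# GPT-2 can be swapped in via GPT2ForSequenceClassification after setting
# tok.pad_token = tok.eos_token and model.config.pad_token_id.
```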
