131

Extending the explanatory power of factor pricing models using topic modeling / Högre förklaringsgrad hos faktorprismodeller genom topic modeling

Everling, Nils January 2017
Factor models attribute stock returns to a linear combination of factors. A model with great explanatory power (R²) can be used to estimate the systematic risk of an investment. One of the most important factors is the industry in which the company behind the stock operates. In commercial risk models this factor is often determined with a manually constructed stock classification scheme such as GICS. We present the Natural Language Industry Scheme (NLIS), an automatic and multivalued classification scheme based on topic modeling. The topic modeling is performed on transcripts of company earnings calls and identifies a number of topics analogous to industries. We use non-negative matrix factorization (NMF) on a term-document matrix of the transcripts to perform the topic modeling. When set to explain returns of the MSCI USA index, we find that NLIS consistently outperforms GICS, often by several hundred basis points. We attribute this to NLIS’ ability to assign a stock to multiple industries. We also suggest that the proportions of industry assignments for a given stock could correspond to expected future revenue sources rather than current revenue sources. This property could explain some of NLIS’ success, since it closely relates to theoretical stock pricing.
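
A minimal sketch of the NMF step described above, assuming earnings-call transcripts are available as plain strings. The toy transcripts, the TF-IDF weighting, and the variable names are illustrative assumptions, not details from the thesis.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

transcripts = [
    "revenue from cloud services and software subscriptions grew this quarter",
    "drilling costs and crude oil prices affected refining margins",
]
n_industries = 2  # the thesis would use far more topics

# Build the term-document matrix (here TF-IDF weighted).
vectorizer = TfidfVectorizer(stop_words="english", max_features=5000)
X = vectorizer.fit_transform(transcripts)

# Factorize X ~ W @ H: W gives each company's mixture over industry-like
# topics (the multivalued assignment), H gives each topic's word profile.
nmf = NMF(n_components=n_industries, init="nndsvd", random_state=0)
W = nmf.fit_transform(X)   # document-topic weights
H = nmf.components_        # topic-term weights

# Top words per discovered industry-like topic.
terms = vectorizer.get_feature_names_out()
for k, row in enumerate(H):
    top = [terms[i] for i in row.argsort()[::-1][:5]]
    print(f"topic {k}: {top}")
```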
132

Evaluating Hierarchical LDA Topic Models for Article Categorization

Lindgren, Jennifer January 2020
With the vast amount of information available on the Internet today, helping users find relevant content has become a prioritized task in many software products that recommend news articles. One such product is Opera for Android, which has a news feed containing articles the user may be interested in. In order to easily determine which articles to recommend, they can be categorized by the topics they contain. One approach to categorizing articles is using Machine Learning and Natural Language Processing (NLP). A commonly used model is Latent Dirichlet Allocation (LDA), which finds latent topics within large datasets, such as collections of text articles. An extension of LDA is hierarchical Latent Dirichlet Allocation (hLDA), in which the latent topics found among a set of articles are structured hierarchically in a tree. Each node represents a topic, and the levels represent different levels of abstraction in the topics. A further extension of hLDA is constrained hLDA, where a set of predefined, constrained topics are added to the tree. The constrained topics are extracted from the dataset by grouping highly correlated words. The idea of constrained hLDA is to improve the topic structure derived by an hLDA model by making the process semi-supervised. The aim of this thesis is to create an hLDA and a constrained hLDA model from a dataset of articles provided by Opera. The models are then evaluated using the novel metric word frequency similarity, a measure of the similarity between the words representing the parent and child topics in a hierarchical topic model. The results show that word frequency similarity can be used to evaluate whether the topics in a parent-child topic pair are too similar, so that the child does not specify a subtopic of the parent. It can also be used to evaluate whether the topics are too dissimilar, so that they seem unrelated and perhaps should not be connected in the hierarchy. The results also show that the two topic models had comparable word frequency similarity scores; neither model significantly outperformed the other with regard to the metric.
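
The abstract does not spell out how word frequency similarity is computed; one plausible instantiation, sketched below, scores a parent-child topic pair by the cosine similarity of their word-frequency vectors. The topic word counts are invented for illustration, and the thesis's exact definition may differ.

```python
from collections import Counter
import math

def word_freq_similarity(parent: Counter, child: Counter) -> float:
    """Cosine similarity of two word-frequency vectors (0 = unrelated, 1 = identical)."""
    vocab = set(parent) | set(child)
    dot = sum(parent[w] * child[w] for w in vocab)
    norm_p = math.sqrt(sum(v * v for v in parent.values()))
    norm_c = math.sqrt(sum(v * v for v in child.values()))
    return dot / (norm_p * norm_c) if norm_p and norm_c else 0.0

# Invented word frequencies for a parent topic and one of its children.
parent = Counter({"sport": 40, "game": 30, "team": 20})
child = Counter({"football": 35, "team": 25, "game": 15})

s = word_freq_similarity(parent, child)
# Very high scores suggest the child adds no specificity over the parent;
# very low scores suggest the pair may not belong together in the hierarchy.
print(round(s, 3))
```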
133

Hierarchical Text Topic Modeling with Applications in Social Media-Enabled Cyber Maintenance Decision Analysis and Quality Hypothesis Generation

SUI, ZHENHUAN 27 October 2017
No description available.
134

Sentiment Analysis of COVID-19 Vaccine Discourse on Twitter

Andersson, Patrik January 2024
The rapid development and distribution of COVID-19 vaccines have sparked diverse public reactions globally, often reflected through social media platforms like Twitter. This study aims to analyze the sentiment and public discourse surrounding COVID-19 vaccines on Twitter, utilizing advanced text classification techniques to navigate the vast, unstructured nature of social media data. By implementing sentiment analysis, the research categorizes tweets into positive, negative, and neutral sentiments to gauge public opinion more effectively. In-depth analysis through topic modeling techniques helped identify seven key topics influencing public sentiment, including aspects related to efficacy, logistical challenges, safety concerns, and personal experiences, each varying in prominence depending on the country as well as the specific timeline of vaccine deployment. Additionally, this study explores geographical variations in sentiment, noting significant differences in public opinion across different countries. These variations could be tied to local cultural, social, and political contexts. Results from this study show a polarized response towards vaccination, with significant discourse clusters showing either strong support for or resistance against the COVID-19 vaccination efforts. This polarization is further pronounced by the logistical challenges and trust issues related to vaccine science, particularly emphasized in tweets from countries with lower vaccine acceptance rates. This sentiment analysis on Twitter offers valuable insights into the public's perception and acceptance of COVID-19 vaccines, providing a useful tool for policymakers and public health officials to understand and address public concerns effectively. By identifying and understanding the key factors influencing vaccine sentiment, targeted communication strategies can be developed to enhance public engagement and vaccine uptake.
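
The abstract does not name the classifier used; as a stand-in, the sketch below labels tweets as positive, negative, or neutral with NLTK's off-the-shelf VADER analyzer, using the conventional ±0.05 compound-score thresholds. The example tweets are invented.

```python
import nltk
nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
from nltk.sentiment import SentimentIntensityAnalyzer

sia = SentimentIntensityAnalyzer()

def label_tweet(text: str) -> str:
    # VADER's compound score lies in [-1, 1].
    score = sia.polarity_scores(text)["compound"]
    if score >= 0.05:
        return "positive"
    if score <= -0.05:
        return "negative"
    return "neutral"

tweets = [
    "Got my second dose today, feeling grateful!",
    "Still no appointment slots anywhere. This rollout is a mess.",
]
for t in tweets:
    print(label_tweet(t), "|", t)
```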
135

Traitement automatique du langage naturel pour les textes juridiques : prédiction de verdict et exploitation de connaissances du domaine / Natural language processing for legal texts: verdict prediction and domain knowledge exploitation

Salaün, Olivier 12 1900
At the intersection of natural language processing and law, legal judgment prediction is a task that represents the problem of predictive justice, i.e. testing the capacity of an automated system to predict the verdict decided by a judge in a court ruling. The thesis presents, from end to end, the implementation of such a task formalized as a multilabel classification, along with different strategies attempting to improve classifier performance. The whole work is based on a corpus of decisions from the Administrative housing tribunal of Québec (disputes between landlords and tenants). First of all, preliminary preprocessing and an in-depth analysis of the corpus highlight its most prominent domain aspects. This crucial step ensures that the verdict prediction task is sound, and also emphasizes biases that must be taken into consideration in subsequent tasks. Indeed, a first testbed comparing different models on this task reveals that they tend to exacerbate biases pre-existing within the corpus (e.g. their verdicts are even less favourable to tenants than those of a human judge). In light of this, the next experiments aim at improving classification performance and mitigating these biases, focusing on CamemBERT. To do so, knowledge from the target domain (housing law) is exploited. A first approach consists in employing articles of law as input features under different representations, but this method is far from a panacea. Another approach, relying on topic modeling, focuses on the topics that can be extracted from the text describing the disputed facts. Automatic and manual evaluation of the topics obtained shows evidence of their informativeness about the reasons leading litigants to go to court. On this basis, the last part of our work revisits the verdict prediction task by relying on both information retrieval (IR) systems and the topics assigned to decisions. The models designed here have the particularity of relying on jurisprudence (relevant past cases) retrieved with different search criteria (e.g. similarity at the text or topic level). Models using IR criteria based on bags-of-words (Lucene) and topics obtain significant gains in terms of Macro F1 scores. However, the aforementioned bias amplification issue, though mitigated, remains. Overall, exploiting domain-related knowledge can improve the performance of verdict predictors, but the persistence of biases in the predictions hinders the deployment of such models on a large scale in the real world. On the other hand, the results obtained from topic modeling suggest better prospects for improving the accessibility and readability of legal documents for human users.
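
As a rough illustration of the task framing only (the thesis itself fine-tunes CamemBERT and adds IR-based evidence), here is a toy multilabel verdict-prediction baseline with TF-IDF features, a one-vs-rest logistic regression, and the Macro F1 metric mentioned above. The decisions, verdict labels, and train/test split are invented toy data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.metrics import f1_score

# Toy corpus: facts sections of four invented housing-tribunal decisions.
decisions = [
    "Le locataire n'a pas payé le loyer depuis trois mois.",
    "Le logement présente des moisissures et le propriétaire refuse d'intervenir.",
    "Le locataire a causé des dommages et le bail doit être résilié.",
    "Le loyer impayé s'élève à 1200 dollars.",
]
verdicts = [
    ["recouvrement_loyer"],
    ["travaux_ordonnes"],
    ["resiliation_bail", "dommages"],   # a decision can carry several verdicts
    ["recouvrement_loyer"],
]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(verdicts)          # one binary column per verdict label

# Toy setup: a real experiment would fit the vectorizer on training data only.
X = TfidfVectorizer().fit_transform(decisions)

clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X[:3], Y[:3])
pred = clf.predict(X[3:])

# Macro F1: the score the thesis reports for verdict prediction.
print(f1_score(Y[3:], pred, average="macro", zero_division=0))
```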
136

Student Scientometrics – What do German Students of the Humanities Cite in their Term Papers?

Henning, Tim, Gutiérrez De la Torre, Silvia E., Burghardt, Manuel 11 July 2024
No description available.
137

A Topic Modeling approach for Code Clone Detection

Khan, Mohammed Salman 01 January 2019
In this thesis work, the potential benefits of Latent Dirichlet Allocation (LDA) as a technique for code clone detection are described. The objective is to propose a language-independent, effective, and scalable approach for identifying similar code fragments in relatively large software systems. The main assumption is that the latent topic structure of software artifacts gives an indication of the presence of code clones. It can be hypothesized that artifacts with similar topic distributions contain duplicated code fragments, and to test this hypothesis, an experimental investigation using multiple datasets from various application domains was conducted. In addition, CloneTM, an LDA-based working prototype for code clone detection, was developed. Results showed that, if calibrated properly, topic modeling can deliver satisfactory performance in capturing different types of code clones, showing particularly good performance in detecting Type III clones. CloneTM also achieved levels of performance comparable to existing practical tools that adopt different clone detection strategies.
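
A minimal sketch of the core idea, assuming clone candidates are code fragments whose LDA topic distributions are nearly identical under cosine similarity. The tokenized fragments, the two-topic model, and the 0.9 threshold are illustrative assumptions, not CloneTM's actual settings.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

# Code fragments pre-tokenized into identifier/keyword streams
# (treating code as text keeps the approach language-independent).
fragments = [
    "for i in range n total += prices i return total",
    "for j in range m acc += cost j return acc",
    "open file read lines parse json close file",
]

# Bag-of-tokens term-document matrix; \w+ keeps single-letter identifiers.
X = CountVectorizer(token_pattern=r"\w+").fit_transform(fragments)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(X)  # per-fragment topic distributions

# Pairs with near-identical topic mixtures are flagged as clone candidates.
sim = cosine_similarity(theta)
for i in range(len(fragments)):
    for j in range(i + 1, len(fragments)):
        if sim[i, j] > 0.9:
            print(f"clone candidate: fragments {i} and {j} (sim={sim[i, j]:.2f})")
```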
138

Analyse temporelle et sémantique des réseaux sociaux typés à partir du contenu de sites généré par des utilisateurs sur le Web / Temporal and semantic analysis of richly typed social networks from user-generated content sites on the web

Meng, Zide 07 November 2016
We propose an approach to detect topics, overlapping communities of interest, expertise, trends and activities in user-generated content sites and in particular in question-answering forums such as StackOverFlow. We first describe QASM (Question & Answer Social Media), a system based on social network analysis to manage the two main resources in question-answering sites: users and contents. We also introduce the QASM vocabulary used to formalize both the level of interest and the expertise of users on topics. We then propose an efficient approach to detect communities of interest. It relies on another method to enrich questions with a more general tag when needed. We compared three detection methods on a dataset extracted from the popular Q&A site StackOverflow. Our method based on topic modeling and user membership assignment is shown to be much simpler and faster while preserving the quality of the detection. We then propose an additional method to automatically generate a label for a detected topic by analyzing the meaning and links of its bag of words. We conduct a user study to compare different algorithms to choose the label. Finally we extend our probabilistic graphical model to jointly model topics, expertise, activities and trends. We performed experiments with real-world data to confirm the effectiveness of our joint model, studying the users' behaviors and topics dynamics.
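
A rough sketch of the "topic modeling and user membership assignment" idea for overlapping communities of interest, assuming a user-by-tag count matrix built from the questions each user touched. The users, tags, NMF factorization, and the 0.2 membership threshold are illustrative assumptions rather than the thesis's actual pipeline.

```python
import numpy as np
from sklearn.decomposition import NMF

tags = ["python", "pandas", "java", "spring"]
users = ["alice", "bob", "carol"]
counts = np.array([
    [12, 8, 0, 0],   # alice: mostly Python/pandas questions
    [1, 0, 10, 7],   # bob: mostly Java/Spring
    [6, 4, 5, 3],    # carol: active in both areas
], dtype=float)

# Factorize the user-tag matrix into user-topic membership strengths.
nmf = NMF(n_components=2, init="nndsvd", random_state=0)
membership = nmf.fit_transform(counts)
membership /= membership.sum(axis=1, keepdims=True) + 1e-12  # normalize rows

# A user joins every community where their membership clears the threshold,
# so communities of interest may overlap.
for u, row in zip(users, membership):
    communities = [k for k, w in enumerate(row) if w > 0.2]
    print(u, "->", communities)
```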
