121. Investigating Performance of Different Models at Short Text Topic Modelling

Akinepally, Pratima Rao January 2020
The key objective of this project was to quantitatively and qualitatively assess the performance of a sentence embedding model, Universal Sentence Encoder (USE), and a word embedding model, word2vec, at the task of topic modelling. The first step in the process was data collection. The data used for the project consisted of podcast descriptions available at Spotify and the topics associated with them. The data was then used to generate description vectors and topic vectors with the embedding models, which were in turn used to assign topics to descriptions. The results led to the conclusion that embedding models are well suited to this task, and that overall USE outperforms the word2vec models.
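As a concrete illustration of the assignment step described above, here is a minimal sketch assuming mean-pooled word2vec vectors and cosine similarity between description vectors and topic vectors; the toy descriptions, topic labels, and model parameters are illustrative, and the USE variant is omitted.

```python
# A minimal sketch of embedding-based topic assignment, assuming mean-pooled
# word2vec vectors. The Spotify podcast data is not available here, so tiny
# toy descriptions and topic labels stand in for it.
import numpy as np
from gensim.models import Word2Vec

descriptions = [
    "weekly interviews about football tactics and transfers".split(),
    "guided meditation and breathing exercises for sleep".split(),
]
topics = {"sports": "football sport match".split(),
          "wellness": "meditation sleep relax".split()}

# Train a tiny word2vec model on the toy corpus (the thesis used far more data).
model = Word2Vec(descriptions + list(topics.values()), vector_size=50, min_count=1, seed=0)

def embed(tokens):
    """Mean-pool word vectors into a single description/topic vector."""
    return np.mean([model.wv[t] for t in tokens if t in model.wv], axis=0)

def assign_topic(desc_tokens):
    d = embed(desc_tokens)
    scores = {}
    for name, words in topics.items():
        t = embed(words)
        scores[name] = float(d @ t / (np.linalg.norm(d) * np.linalg.norm(t)))
    return max(scores, key=scores.get)  # nearest topic by cosine similarity

for d in descriptions:
    print(" ".join(d[:4]), "->", assign_topic(d))
```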
122. Supporting Source Code Comprehension During Software Evolution and Maintenance

Alhindawi, Nouh Talal 30 July 2013
No description available.
123. Generative models of topical context for Information Retrieval

Deveaud, Romain 29 November 2013
When searching for information within knowledge bases or document collections, humans use an information retrieval system (IRS). So that it can retrieve documents containing relevant information, users have to provide the IRS with a representation of their information need. Nowadays, this representation is composed of a small set of keywords, often referred to as the "query". A few words may, however, not be sufficient to accurately and effectively represent the complete cognitive state of a human with respect to her initial information need. A query may not contain sufficient information if the user is searching for a topic in which she is not confident at all. Hence, without some kind of complementary topical context, the IRS could simply miss nuances or details that the user did not, or could not, provide in the query.
In this thesis, we explore and propose various statistical, automatic and unsupervised methods for representing the topical context of the query. More specifically, we aim to identify the latent concepts of a query without involving the user in the process or requiring explicit feedback. To this end, we experiment with using and combining several general information sources that represent the main types of information we deal with daily while browsing the Web. We also leverage probabilistic topic models (such as Latent Dirichlet Allocation) in a pseudo-relevance feedback setting. Besides, we propose a method for jointly estimating the number of latent concepts of a query and the set of pseudo-relevant feedback documents that is most suitable for modelling these concepts. We evaluate our approaches using four large TREC test collections. In the appendix of this thesis, we also propose an approach for contextualizing short messages which leverages both information retrieval and automatic summarization techniques.
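A minimal sketch of the pseudo-relevance feedback idea above follows, assuming the top-ranked documents for a query have already been retrieved and LDA is fit on them to surface the query's latent concepts; the toy documents and the fixed number of concepts are illustrative (the thesis estimates that number jointly).

```python
# A minimal sketch: treat the top-k retrieved documents as pseudo-relevance
# feedback and extract the query's latent concepts with LDA. Documents and
# the value of n_components below are illustrative assumptions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Stand-in for the top-k documents an IR system returned for the query "jaguar".
feedback_docs = [
    "the jaguar is a large cat native to the americas",
    "jaguar cars unveiled a new luxury sedan this year",
    "the big cat hunts at night in the rainforest",
    "the british car maker reported strong sales",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(feedback_docs)

# One latent concept per "sense" of the query; fixed here for simplicity.
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for i, comp in enumerate(lda.components_):
    top = comp.argsort()[-4:][::-1]
    print(f"concept {i}:", ", ".join(terms[j] for j in top))
```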
124. Miljöpartiet and the never-ending nuclear energy debate: A computational rhetorical analysis of Swedish climate policy

Dickerson, Claire January 2022
The domain of rhetoric has changed dramatically since its inception as the art of persuasion. It has adapted to encompass many forms of digital media, including, for example, data visualization and coding as a form of literature, but the approach has frequently been that of an outsider looking in, and comprehensive computational tools have largely been absent from rhetorical analysis. In this report, we attempt to address this lack by means of three case studies in natural language processing tasks, all of which can be used as part of a computational approach to rhetoric. At the same time, the transition to renewable energy is becoming all the more important in order to keep global warming under 1.5 degrees Celsius and to ensure that countries meet the conditions of the Paris Agreement. We therefore make use of speech data on climate policy from the Swedish parliament to ground the three analyses: semantic textual similarity, topic modeling, and political party attribution. We find that speeches are, to a certain extent, consistent within parties, given that for a slight majority of speeches the most semantically similar speech comes from the same party. We also find that some of the most common topics discussed in these speeches are nuclear energy and the Swedish Green party, purported environmental risks due to renewable energy sources, and the job market. Finally, we find that although pairs of speeches can be semantically similar, party rhetoric as a whole is generally not distinctive enough for speeches to be attributed to their party. These results open the door for a broader exploration of computational rhetoric for Swedish political science in the future.
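A minimal sketch of the same-party consistency check described above, using TF-IDF cosine similarity as a stand-in for the sentence-embedding similarity the thesis computes; the speeches and party labels are invented placeholders.

```python
# A minimal sketch: for each speech, find its most similar speech and check
# whether both come from the same party. TF-IDF cosine is a simplification;
# all speeches and party labels are placeholders, not real parliament data.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

speeches = [
    "nuclear power is essential for a stable fossil free grid",
    "we must phase out nuclear energy and invest in wind power",
    "wind and solar expansion creates jobs in rural regions",
    "closing reactors too early threatens jobs and the climate goals",
]
party = ["M", "MP", "MP", "M"]

sim = cosine_similarity(TfidfVectorizer().fit_transform(speeches))
np.fill_diagonal(sim, -1.0)  # ignore self-similarity

nearest = sim.argmax(axis=1)
same_party = [party[i] == party[j] for i, j in enumerate(nearest)]
print(f"{sum(same_party)}/{len(speeches)} speeches are closest to a same-party speech")
```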
125. Reclaiming the “C” in ICT4D: A Critical Examination of the Discursive (Un)Freedoms in Digital State Policy and News Media of Bangladesh and Norway

Ala-Uddin, Mohammad 11 May 2022
No description available.
126. Neural Methods Towards Concept Discovery from Text via Knowledge Transfer

Das, Manirupa January 2019
No description available.
127. Extracting Reliable Information from Large Collections of Legal Decisions

Fernando Alberto Correia dos Santos Junior 09 June 2022
As a natural consequence of the digitization of the Brazilian judicial system, a large and increasing number of legal documents have become available on the Internet, especially judicial decisions. As an illustration, in 2020 the Brazilian Judiciary produced 25 million decisions. In that same year, the Brazilian Supreme Court (STF), the highest judicial body in Brazil, alone produced 99.5 thousand decisions. In line with those numbers, we face a growing demand for studies focused on extracting and exploring the legal knowledge hidden in those large collections of legal documents. However, unlike typical textual content (e.g., books, news, and blog posts), legal text constitutes a particular case of highly conventionalized language, and little attention has been paid to information extraction in such specialized domains. From a temporal perspective, the Judiciary itself is a constantly evolving institution, which molds itself to cope with the demands of society. Our goal is therefore to propose a reliable process for extracting legal information from large collections of legal documents, based on the STF scenario and the monocratic decisions it published between 2000 and 2018.
To do so, we explore the combination of different Natural Language Processing (NLP) and Information Extraction (IE) techniques in the legal domain. From NLP, we explore automated named entity recognition strategies for legal text. From IE, we explore dynamic topic modeling with tensor decomposition as a tool to investigate how the legal reasoning embedded in those decisions changes over time, through the evolution of the texts and the presence of legal named entities. For reliability, we explore the interpretability of the methods employed and add visual resources to facilitate interpretation by a domain specialist. As a final result, we expect to propose a reliable and cost-effective process that supports further studies in the legal domain, together with new strategies for information extraction from large collections of documents.
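A minimal sketch of the dynamic topic modeling idea above, assuming a (year × term × entity) count tensor decomposed with CP/PARAFAC via tensorly; the random counts stand in for the real STF decision data, and the tensor layout itself is an assumption about the setup.

```python
# A minimal sketch: each CP component couples a temporal profile, a term
# profile, and a named-entity profile, so its temporal factor shows how one
# latent "line of reasoning" waxes or wanes. The random tensor is a stand-in
# for real counts over STF decisions, which are not reproduced here.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(0)
# counts[y, t, e]: how often term t co-occurs with legal entity e in year y
counts = tl.tensor(rng.poisson(2.0, size=(19, 40, 10)).astype(float))  # 2000-2018

weights, factors = parafac(counts, rank=3)
years, terms, entities = factors  # shapes: (19, 3), (40, 3), (10, 3)

for k in range(3):
    peak = 2000 + int(np.argmax(years[:, k]))
    print(f"component {k}: temporal peak around {peak}")
```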
128. Extending the explanatory power of factor pricing models using topic modeling

Everling, Nils January 2017
Factor models attribute stock returns to a linear combination of factors. A model with great explanatory power (R²) can be used to estimate the systematic risk of an investment. One of the most important factors is the industry in which the company of the stock operates. In commercial risk models this factor is often determined with a manually constructed stock classification scheme such as GICS. We present the Natural Language Industry Scheme (NLIS), an automatic and multivalued classification scheme based on topic modeling. The topic modeling is performed on transcripts of company earnings calls and identifies a number of topics analogous to industries. We use non-negative matrix factorization (NMF) on a term-document matrix of the transcripts to perform the topic modeling. When set to explain returns of the MSCI USA index, we find that NLIS consistently outperforms GICS, often by several hundred basis points. We attribute this to NLIS's ability to assign a stock to multiple industries. We also suggest that the proportions of industry assignments for a given stock could correspond to expected future revenue sources rather than current revenue sources. This property could explain some of NLIS's success, since it closely relates to theoretical stock pricing.
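A minimal sketch of the NLIS construction described above: NMF on a term-document matrix of earnings-call transcripts, with each document's factor loadings read as multivalued industry weights; the toy transcripts and the number of industries are illustrative.

```python
# A minimal sketch: NMF on a term-document matrix yields "industry" topics,
# and row-normalising the document-topic matrix gives each company industry
# proportions rather than a single label. Transcripts below are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

transcripts = [
    "revenue from cloud subscriptions and software licenses grew this quarter",
    "drilling costs fell while crude oil production and refining margins rose",
    "our software platform now also serves oil field logistics customers",
]

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(transcripts)        # term-document matrix

nmf = NMF(n_components=2, random_state=0)
W = nmf.fit_transform(X)                  # document -> industry weights
terms = vec.get_feature_names_out()

for k, comp in enumerate(nmf.components_):
    top = comp.argsort()[-3:][::-1]
    print(f"industry {k}:", ", ".join(terms[j] for j in top))

# Multivalued assignment: each stock gets industry proportions, not one label.
print((W / W.sum(axis=1, keepdims=True)).round(2))
```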
129. Evaluating Hierarchical LDA Topic Models for Article Categorization

Lindgren, Jennifer January 2020
With the vast amount of information available on the Internet today, helping users find relevant content has become a prioritized task in many software products that recommend news articles. One such product is Opera for Android, which has a news feed containing articles the user may be interested in. In order to easily determine which articles to recommend, they can be categorized by the topics they contain. One approach to categorizing articles is to use Machine Learning and Natural Language Processing (NLP). A commonly used model is Latent Dirichlet Allocation (LDA), which finds latent topics within large datasets such as collections of text articles. An extension of LDA is hierarchical Latent Dirichlet Allocation (hLDA), in which the latent topics found among a set of articles are structured hierarchically in a tree. Each node represents a topic, and the levels represent different levels of abstraction in the topics. A further extension of hLDA is constrained hLDA, where a set of predefined, constrained topics is added to the tree. The constrained topics are extracted from the dataset by grouping highly correlated words; the idea is to improve the topic structure derived by an hLDA model by making the process semi-supervised. The aim of this thesis is to create an hLDA model and a constrained hLDA model from a dataset of articles provided by Opera, and to evaluate the models using the novel metric word frequency similarity, a measure of the similarity between the words representing the parent and child topics in a hierarchical topic model. The results show that word frequency similarity can be used to evaluate whether the topics in a parent-child topic pair are too similar, so that the child does not specify a subtopic of the parent. It can also be used to evaluate whether the topics are too dissimilar, so that they seem unrelated and perhaps should not be connected in the hierarchy. The results also show that the two topic models had comparable word frequency similarity scores; neither seemed to significantly outperform the other with regard to the metric.
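Since the exact definition of word frequency similarity is given in the thesis rather than here, the sketch below assumes a simple cosine over the topics' word-frequency vectors to illustrate the too-similar/too-dissimilar reading; the topics, frequencies, and thresholds are all hypothetical.

```python
# A hedged sketch of a word-frequency-similarity-style check for one
# parent-child topic pair. The thesis defines the actual metric; here we
# assume cosine similarity over {word: frequency} topic representations.
import numpy as np

def word_frequency_similarity(parent: dict, child: dict) -> float:
    """Cosine similarity between two topics' word-frequency vectors."""
    vocab = sorted(set(parent) | set(child))
    p = np.array([parent.get(w, 0.0) for w in vocab])
    c = np.array([child.get(w, 0.0) for w in vocab])
    return float(p @ c / (np.linalg.norm(p) * np.linalg.norm(c)))

parent = {"sport": 30, "match": 22, "team": 18, "win": 10}
child = {"football": 25, "match": 20, "goal": 15, "team": 12}

s = word_frequency_similarity(parent, child)
verdict = ("too similar" if s > 0.9
           else "too dissimilar" if s < 0.1
           else "plausible subtopic")
print(f"similarity = {s:.2f} ({verdict}); thresholds here are illustrative")
```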
130. Hierarchical Text Topic Modeling with Applications in Social Media-Enabled Cyber Maintenance Decision Analysis and Quality Hypothesis Generation

Sui, Zhenhuan 27 October 2017
No description available.
