1. Graph-Based Keyphrase Extraction Using Wikipedia (Dandala, Bharath, 12 1900)
Keyphrases describe a document in a coherent and simple way, giving the prospective reader a quick means of determining whether the document satisfies their information needs. With a huge amount of information pervading the Web and only a small fraction of documents having keyphrases assigned, there is a definite need for automatic keyphrase extraction systems. Typically, a document written by a human develops around one or more general concepts or sub-concepts. These concepts or sub-concepts should be structured and semantically related to each other, so that they form a meaningful representation of the document. Building on the observation that the phrases or concepts in a document are related to each other, a new approach to keyphrase extraction is introduced that exploits the semantic relations in the document. For measuring the semantic relations between concepts or sub-concepts in the document, I present a comprehensive study of collaboratively constructed semantic resources, namely Wikipedia and its link structure. In particular, I introduce a graph-based keyphrase extraction system that exploits these semantic relations together with features such as term frequency. I evaluated the proposed system using novel measures, and the results obtained compare favorably with previously published results on established benchmarks.
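A minimal sketch of the graph-based ranking idea behind such systems, assuming a plain PageRank over a word co-occurrence graph; the Wikipedia-derived semantic edge weighting described above is omitted, and the window size is an assumption of this sketch:

```python
from collections import defaultdict

def build_cooccurrence_graph(words, window=4):
    """Connect words that co-occur within a sliding window (undirected)."""
    graph = defaultdict(set)
    for i in range(len(words)):
        for j in range(i + 1, min(i + window, len(words))):
            if words[i] != words[j]:
                graph[words[i]].add(words[j])
                graph[words[j]].add(words[i])
    return graph

def pagerank(graph, damping=0.85, iterations=50):
    """Plain PageRank with uniform edge weights over the word graph."""
    nodes = list(graph)
    score = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new = {}
        for n in nodes:
            incoming = sum(score[m] / len(graph[m]) for m in graph[n])
            new[n] = (1 - damping) / len(nodes) + damping * incoming
        score = new
    return score

tokens = ("graph based keyphrase extraction exploits semantic relations "
          "between concepts in the document graph").split()
ranks = pagerank(build_cooccurrence_graph(tokens))
print(sorted(ranks, key=ranks.get, reverse=True)[:5])
```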
2. Solving the broken link problem in Walden's Paths (Dalal, Zubin Jamshed, 30 September 2004)
With the extent of the web expanding at an increasing rate, the problems caused by broken links are reaching epidemic proportions. Studies have indicated that a substantial number of links on the Internet are broken. User surveys indicate broken links are considered the third biggest problem faced on the Internet.
Currently, the Walden's Paths Path Manager tool can detect the degree and type of change within a page on a path. Although it can also highlight missing pages or broken links, it has no method of correcting them, leaving the broken link problem unsolved. This thesis proposes a solution to this problem in Walden's Paths.
The solution centers on the idea that "significant" keyphrases extracted from the original page can be used to accurately locate the document using a search engine. This thesis proposes an algorithm that extracts representative keyphrases to locate exact copies of the original page. In the absence of an exact copy, a similar but separate algorithm extracts keyphrases that help locate similar pages that can be substituted for the missing page. Both sets of keyphrases are stored as additions to the page signature in the Path Manager tool and can be used when the original page is removed from its current location on the Web.
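A minimal sketch of the relocation idea, assuming plain TF-IDF term scoring rather than the thesis's signature-based keyphrase selection: the most discriminative terms of the cached page become a quoted search-engine query.

```python
import math
from collections import Counter

def top_keyphrases(page_tokens, background_docs, k=5):
    """Score each term of the cached page by TF-IDF against a background collection."""
    tf = Counter(page_tokens)
    n_docs = len(background_docs)
    score = {}
    for term, freq in tf.items():
        df = sum(1 for doc in background_docs if term in doc)
        score[term] = freq * math.log((n_docs + 1) / (df + 1))
    return sorted(score, key=score.get, reverse=True)[:k]

def relocation_query(page_tokens, background_docs):
    # Quoting the terms biases the engine toward exact matches,
    # which suits the "find an exact copy" case.
    terms = top_keyphrases(page_tokens, background_docs)
    return " ".join(f'"{t}"' for t in terms)
```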
3. SurfKE: A Graph-Based Feature Learning Framework for Keyphrase Extraction (Florescu, Corina Andreea, 08 1900)
Current unsupervised approaches to keyphrase extraction compute a single importance score for each candidate word from the number and quality of its associated words in the graph, and they are not flexible enough to incorporate multiple types of information. For instance, nodes in a network may exhibit diverse connectivity patterns that are not captured by graph-based ranking methods. To address this, we present a new approach to keyphrase extraction that represents the document as a word graph and exploits its structure to reveal underlying explanatory factors hidden in the data that may distinguish keyphrases from non-keyphrases. Experimental results show that our model, which uses phrase graph representations in a supervised probabilistic framework, obtains remarkable improvements in performance over previous supervised and unsupervised keyphrase extraction systems.
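A small sketch of the multi-feature intuition: rather than a single centrality score, each candidate node is described by several structural properties of the word graph. Degree and the local clustering coefficient are illustrative choices, not necessarily the features SurfKE actually learns.

```python
def node_features(graph):
    """graph: dict mapping word -> set of neighboring words."""
    feats = {}
    for node, neighbors in graph.items():
        degree = len(neighbors)
        # Count edges among the node's neighbors (for the clustering coefficient).
        links = sum(1 for a in neighbors for b in neighbors
                    if a < b and b in graph.get(a, set()))
        possible = degree * (degree - 1) / 2
        clustering = links / possible if possible else 0.0
        feats[node] = {"degree": degree, "clustering": clustering}
    return feats
```

These per-node feature vectors can then be fed to any probabilistic classifier to separate keyphrases from non-keyphrases.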
4. SemKeyphrase: A Novel Unsupervised Approach for Keyphrase Extraction from MOOC Video Lectures (Albahr, Abdulaziz Ali, 01 December 2019)
In massive open online courses (MOOCs), a pressing need has emerged for an efficient automated approach to identifying keyphrases in MOOC video lectures. Because of the linear structure of MOOCs and the linear way of navigating their content, learners have difficulty knowing what knowledge a video lecture addresses and spend too much time navigating among lectures to find the content matching their learning goals. A feasible solution is the automatic provision of keyphrases associated with MOOC video lectures, helping learners quickly identify suitable knowledge and efficiently navigate to the desired parts of video lectures, thereby expediting their learning process. Keyphrases in MOOCs exhibit three unique features: (1) low-frequency occurrence, (2) advanced scientific or technical concepts, and (3) late occurrence. Existing approaches to automatic keyphrase extraction (whether supervised or unsupervised) do not consider these features, causing them to perform unsatisfactorily when used to extract keyphrases from MOOC video lectures. In this dissertation, we propose SemKeyphrase, an unsupervised cluster-based approach for keyphrase extraction from MOOC video lectures. SemKeyphrase incorporates a new semantic relatedness method and a ranking algorithm called PhraseRank. The semantic relatedness method uses a novel metric that combines two scores, WSem and CSem, to efficiently compute the semantic relatedness between candidate keyphrases in MOOCs. The PhraseRank algorithm ranks candidate keyphrases in two phases: ranking clusters, then reranking the top candidate keyphrases. The first phase leverages the semantic relatedness of candidate keyphrases to the subtopics of a MOOC video lecture to measure their importance, which is then used to rank clusters of candidate keyphrases; top candidates from the top-ranked clusters are selected by a proposed selection strategy. The second phase reranks these top candidates using a new ranking criterion and outputs the final ranked top-K keyphrases. Experimental results on a real-world dataset of MOOC video lectures show that SemKeyphrase outperforms other state-of-the-art methods.
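A hedged sketch of the two-score combination and the cluster-ranking phase, assuming a simple linear combination of WSem and CSem with weight alpha; the dissertation's actual formula and selection strategy are not reproduced here.

```python
def combined_relatedness(wsem, csem, alpha=0.5):
    """Assumed linear blend of the word-level (WSem) and concept-level (CSem) scores."""
    return alpha * wsem + (1 - alpha) * csem

def rank_clusters(clusters, wsem_of, csem_of):
    """clusters: list of candidate-phrase lists; wsem_of/csem_of: phrase -> score.
    Rank clusters by the combined importance of their member phrases."""
    def cluster_score(cluster):
        return sum(combined_relatedness(wsem_of[p], csem_of[p]) for p in cluster)
    return sorted(clusters, key=cluster_score, reverse=True)
```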
5. Automatic structure and keyphrase analysis of scientific publications (Constantin, Alexandru, January 2014)
Purpose. This work addresses an escalating problem within scientific publishing that stems from accelerating publication rates of article formats that are difficult to process automatically. The amount of manual labour required to organise a comprehensive corpus of relevant literature has long been impractical; this has, in effect, reduced research efficiency and delayed scientific advancement. Two complementary approaches to alleviating this problem are detailed and improved beyond the current state of the art: logical structure recovery of articles, and keyphrase extraction.
Methodology. The first approach targets the issue of flat-format publishing. It performs a structural analysis of the camera-ready PDF article and recognises its fine-grained organisation into logical units. The second approach is a keyphrase extraction algorithm that relies on rhetorical information from the recovered structure to better contour an article’s true points of focus. An account of the scientific article’s function, content and structure is provided, along with insights into how different logical components, such as section headings or the bibliography, can be automatically identified and utilised for higher-quality keyphrase extraction.
Findings. Structure recovery can be carried out independently of an article’s formatting specifics by exploiting conventional dependencies between logical components. In addition, access to an article’s logical structure is beneficial across term extraction approaches, reducing input noise and facilitating the emphasis of regions of interest.
Value. The first part of this work details a novel method for recovering the rhetorical structure of scientific articles that is competitive with state-of-the-art machine learning techniques, yet requires no layout-specific tuning or prior training. The second part showcases a keyphrase extraction algorithm that outperforms other solutions on an established benchmark, yet does not rely on collection statistics or external knowledge sources.
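A minimal sketch of how recovered logical structure can feed keyphrase scoring: occurrences in rhetorically prominent units count for more than body-text occurrences. The weight values here are illustrative assumptions, not the thesis's parameters.

```python
# Assumed per-section weights; prominent units boost a candidate's score.
SECTION_WEIGHTS = {"title": 5.0, "abstract": 3.0, "heading": 2.0, "body": 1.0}

def structure_aware_score(occurrences):
    """occurrences: list of (phrase, section_type) pairs from the recovered structure."""
    scores = {}
    for phrase, section in occurrences:
        scores[phrase] = scores.get(phrase, 0.0) + SECTION_WEIGHTS.get(section, 1.0)
    return scores
```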
6. Exploiting email: extracting knowledge to support knowledge sharing (Tedmori, Sara, January 2008)
Effective management of knowledge assets is key to surviving in today's competitive business environment. This is particularly true for large organisations, where employees have difficulty identifying where, or with whom, knowledge lies. Expertise is one of the most important knowledge assets and largely resides in the heads of employees. Many attempts have been made to help locate employees with the right expertise; however, existing systems (often referred to as expertise finding systems) suffer from several flaws. In organisations there are several potential sources in which evidence of expertise might be found, and these have been used by existing approaches to profile employees' expertise. Unfortunately, there has been limited research showing whether these sources contain useful evidence of expertise. Moreover, the majority of existing approaches have not been designed to integrate with organisations' work practices, nor have they investigated the socio-ethical challenges associated with the adoption of such systems. There is therefore a need for expertise finding systems that utilise useful sources of expertise and integrate into existing work practices. Through industry involvement, this research has explored and validated email content as a source for expertise profiling. This thesis provides an overview of traditional and current approaches to expertise finding, then presents the development and implementation of the EKE (Email Knowledge Extraction) system, which aims to overcome the aforementioned challenges. EKE has been evaluated by end-users from both industry and academia. The evaluation results suggest that EKE is a useful system that encourages participation and that, in many cases, may assist in the management of knowledge within organisations.
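A toy sketch of the expertise-profiling idea: keyphrases extracted from the mail an employee sends accumulate into a profile that an expertise query can match against. The extraction step is stubbed out, and EKE's actual pipeline and data model are not shown here.

```python
from collections import defaultdict

def build_profiles(sent_mail, extract_keyphrases):
    """sent_mail: iterable of (sender, body) pairs.
    extract_keyphrases: any callable mapping text -> list of phrases (stub).
    Returns sender -> phrase -> occurrence count."""
    profiles = defaultdict(lambda: defaultdict(int))
    for sender, body in sent_mail:
        for phrase in extract_keyphrases(body):
            profiles[sender][phrase] += 1
    return profiles

def find_experts(profiles, topic, k=3):
    """Rank senders by how often the queried topic appears in their profile."""
    ranked = sorted(profiles, key=lambda s: profiles[s].get(topic, 0), reverse=True)
    return ranked[:k]
```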
7. Technical Term Extraction Using Measures of Neology (Norman, Christopher, January 2016)
This study aims to show that the frequency of occurrence over time differs between technical terms and general-language terms, in that technical terms are strongly biased towards recent occurrence, and that this difference can be exploited for the automatic identification and extraction of technical terms from text. To this end, we propose two features, extracted from temporally labelled datasets, designed to capture surface-level n-gram neology. The analysis shows that these features, calculated over consecutive bigrams, are highly indicative of technical terms, which suggests that technical terms are strongly biased towards being surface-level neologisms. Finally, we implement a technical term extractor using the proposed features and compare its performance against a number of baselines.
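A sketch of one way to capture surface-level neology, assuming a timestamped corpus: the mean document year of a bigram's occurrences serves as a recency feature. The thesis's exact two features are not reproduced; this is an illustrative stand-in.

```python
from collections import defaultdict

def recency_feature(docs):
    """docs: iterable of (year, token_list) pairs; returns bigram -> mean year."""
    years = defaultdict(list)
    for year, tokens in docs:
        for a, b in zip(tokens, tokens[1:]):
            years[(a, b)].append(year)
    return {bigram: sum(ys) / len(ys) for bigram, ys in years.items()}

# Bigrams whose mean year sits close to the corpus's latest year are
# candidate neologisms, and hence candidate technical terms.
```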
8. Text Analysis in Fashion: Keyphrase Extraction (Lin, Yuhao, January 2020)
The ability to extract useful information from texts and present it in the form of structured attributes is an important step towards making product comparison algorithms in fashion smarter and better. Some previous work exploits statistical features such as word frequency, or graph models, to predict keyphrases. In recent years, deep neural networks have proved to be the state-of-the-art approach to language modeling; successful examples include Long Short-Term Memory (LSTM), Gated Recurrent Units (GRU), Bidirectional Encoder Representations from Transformers (BERT) and their variations. In addition, word embedding techniques such as word2vec [1] can also help improve performance. Besides these techniques, a high-quality dataset is also important to the effectiveness of models. In this project, we aim to develop reliable and efficient machine learning models for keyphrase extraction. At Norna AB, we have a collection of product descriptions from different vendors without keyphrase annotations, which motivates the use of unsupervised methods; these should be capable of extracting useful keyphrases that capture the features of a product. To further explore the power of deep neural networks, we also implement several deep learning models. The dataset has two parts: the first, the fashion dataset, where keyphrases are extracted by our unsupervised method; the second, a public dataset in the news domain. We find that the deep learning models are also capable of extracting meaningful keyphrases and outperform the unsupervised model. Precision, recall and F1 score are used as evaluation metrics. The results show that the model combining an LSTM with a CRF achieves the best performance. We also compare the performance of different models with respect to keyphrase length and keyphrase count; the results indicate that all models perform better at predicting short keyphrases. We also show that our refined model has the advantage of predicting long keyphrases, which is challenging in this field.
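A minimal sketch of the sequence-labelling view behind the LSTM-CRF model: the tagger emits one BIO tag per token, keyphrases are recovered by decoding contiguous B/I spans, and predictions are scored with the precision/recall/F1 metrics mentioned above. The network itself is omitted.

```python
def decode_bio(tokens, tags):
    """Decode BIO tags (as emitted by a tagger) into keyphrase strings."""
    phrases, current = [], []
    for token, tag in zip(tokens, tags):
        if tag == "B":
            if current:
                phrases.append(" ".join(current))
            current = [token]
        elif tag == "I" and current:
            current.append(token)
        else:
            if current:
                phrases.append(" ".join(current))
            current = []
    if current:
        phrases.append(" ".join(current))
    return phrases

def prf1(predicted, gold):
    """Exact-match precision, recall and F1 over keyphrase sets."""
    tp = len(set(predicted) & set(gold))
    p = tp / len(predicted) if predicted else 0.0
    r = tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

tokens = ["red", "wool", "coat", "with", "belt"]
tags = ["B", "I", "I", "O", "B"]
print(decode_bio(tokens, tags))  # ['red wool coat', 'belt']
```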
9. Large Language Models for Unsupervised Keyphrase Extraction and Biomedical Data Analytics (Haoran Ding, 03 September 2024)
<p dir="ltr">Natural Language Processing (NLP), a vital branch of artificial intelligence, is designed to equip computers with the ability to comprehend and manipulate human language, facilitating the extraction and utilization of textual data. NLP plays a crucial role in harnessing the vast quantities of textual data generated daily, facilitating meaningful information extraction. Among the various techniques, keyphrase extraction stands out due to its ability to distill concise information from extensive texts, making it invaluable for summarizing and navigating content efficiently. The process of keyphrase extraction usually begins by generating candidates first and then ranking them to identify the most relevant phrases. Keyphrase extraction can be categorized into supervised and unsupervised approaches. Supervised methods typically achieve higher accuracy as they are trained on labeled data, which allows them to effectively capture and utilize patterns recognized during training. However, the dependency on extensive, well-annotated datasets limits their applicability in scenarios where such data is scarce or costly to obtain. On the other hand, unsupervised methods, while free from the constraints of labeled data, face challenges in capturing deep semantic relationships within text, which can impact their effectiveness. Despite these challenges, unsupervised keyphrase extraction holds significant promise due to its scalability and lower barriers to entry, as it does not require labeled datasets. This approach is increasingly favored for its potential to aid in building extensive knowledge bases from unstructured data, which can be particularly useful in domains where acquiring labeled data is impractical. As a result, unsupervised keyphrase extraction is not only a valuable tool for information retrieval but also a pivotal technology for the ongoing expansion of knowledge-driven applications in NLP.</p><p dir="ltr">In this dissertation, we introduce three innovative unsupervised keyphrase extraction methods: AttentionRank, AGRank, and LLMRank. Additionally, we present a method for constructing knowledge graphs from unsupervised keyphrase extraction, leveraging the self-attention mechanism. The first study discusses the AttentionRank model, which utilizes a pre-trained language model to derive underlying importance rankings of candidate phrases through self-attention. This model employs a cross-attention mechanism to assess the semantic relevance between each candidate phrase and the document, enhancing the phrase ranking process. AGRank, detailed in the second study, is a sophisticated graph-based framework that merges deep learning techniques with graph theory. It constructs a candidate phrase graph using mutual attentions from a pre-trained language model. Both global document information and local phrase details are incorporated as enhanced nodes within the graph, and a graph algorithm is applied to rank the candidate phrases. The third study, LLMRank, leverages the strengths of large language models (LLMs) and graph algorithms. It employs LLMs to generate keyphrase candidates and then integrates global information through the text's graphical structures. This process reranks the candidates, significantly improving keyphrase extraction performance. The fourth study explores how self-attention mechanisms can be used to extract keyphrases from medical literature and generate query-related phrase graphs, improving text retrieval visualization. 
The mutual attentions of medical entities, extracted using a pre-trained model, form the basis of the knowledge graph. This, coupled with a specialized retrieval algorithm, allows for the visualization of long-range connections between medical entities while simultaneously displaying the supporting literature. In summary, our exploration of unsupervised keyphrase extraction and biomedical data analysis introduces novel methods and insights in NLP, particularly in information extraction. These contributions are crucial for the efficient processing of large text datasets and suggest avenues for future research and applications.</p>
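A simplified sketch in the spirit of AttentionRank, not its exact formulation: candidate phrases are scored by the self-attention mass their tokens receive in a pre-trained model, averaged over layers and heads. The checkpoint name and the crude wordpiece matching are assumptions of this sketch; a real implementation aligns wordpieces to phrase spans.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

def attention_scores(text, candidates):
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        outputs = model(**inputs)
    # outputs.attentions: one (batch, heads, seq, seq) tensor per layer.
    att = torch.stack(outputs.attentions).mean(dim=(0, 2))[0]  # (seq, seq)
    received = att.sum(dim=0)  # total attention mass each token receives
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    scores = {}
    for cand in candidates:
        pieces = tokenizer.tokenize(cand)
        # Crude matching: collect positions of any wordpiece of the candidate.
        idx = [i for i, t in enumerate(tokens) if t in pieces]
        scores[cand] = received[idx].mean().item() if idx else 0.0
    return scores

print(attention_scores("Keyphrase extraction distills concise information.",
                       ["keyphrase extraction", "information"]))
```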
10. Distributed Document Clustering and Cluster Summarization in Peer-to-Peer Environments (Hammouda, Khaled M., January 2007)
This thesis addresses difficult challenges in distributed document clustering and cluster summarization. Mining large document collections poses many challenges, one of which is extracting topics or summaries from documents in order to interpret clustering results. Another important challenge, driven by new trends in distributed repositories and peer-to-peer computing, is that document data is becoming increasingly distributed.
We introduce a solution for interpreting document clusters using keyphrase extraction from multiple documents simultaneously. We also introduce two solutions for the problem of distributed document clustering in peer-to-peer environments, each satisfying a different goal: maximizing local clustering quality through collaboration, and maximizing global clustering quality through cooperation.
The keyphrase extraction algorithm efficiently extracts and scores candidate keyphrases from a document cluster. The algorithm, called CorePhrase, is based on modeling document collections as a graph upon which graph mining can be leveraged to extract frequent and significant phrases, which are used to label the clusters. Results show that CorePhrase can extract keyphrases relevant to the documents in a cluster with very high accuracy. Although the algorithm can be used to summarize centralized clusters, it is specifically employed within distributed clustering both to boost clustering accuracy and to provide summaries for distributed clusters.
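A minimal sketch of the pairwise-intersection idea, assuming shared n-grams between cluster documents as candidate labels, scored by document support and phrase length; CorePhrase's graph structure and richer scoring features are not reproduced here.

```python
from itertools import combinations

def shared_ngrams(a, b, max_n=4):
    """All n-grams (n >= 2) that token lists a and b have in common."""
    def ngrams(tokens):
        return {tuple(tokens[i:i + n])
                for n in range(2, max_n + 1)
                for i in range(len(tokens) - n + 1)}
    return ngrams(a) & ngrams(b)

def cluster_labels(cluster_docs, top=5):
    """cluster_docs: list of token lists; returns the top shared phrases."""
    counts = {}
    for a, b in combinations(cluster_docs, 2):
        for phrase in shared_ngrams(a, b):
            counts[phrase] = counts.get(phrase, 0) + 1
    ranked = sorted(counts, key=lambda p: (counts[p], len(p)), reverse=True)
    return [" ".join(p) for p in ranked[:top]]
```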
The first method for distributed document clustering is collaborative peer-to-peer document clustering, which models nodes in a peer-to-peer network as collaborative nodes whose goal is improving the quality of their individual local clustering solutions. This is achieved through the exchange of local cluster summaries between peers, followed by the recommendation of documents to be merged into remote clusters. Results on large sets of distributed document collections show that: (i) this collaboration technique achieves significant improvement in the final clustering of individual nodes; (ii) networks with a larger number of nodes generally achieve greater improvements in clustering after collaboration, relative to the initial clustering before collaboration, while tending to achieve lower absolute clustering quality than networks with fewer nodes; and (iii) as more overlap of the data is introduced across the nodes, collaboration tends to have little effect on improving clustering quality.
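A toy sketch of the collaboration step, assuming term-weight dictionaries as document and centroid vectors: a peer recommends local documents that match a received remote cluster summary (its centroid) better than their current local cluster. The real exchange protocol and similarity thresholds are not shown.

```python
import math

def cosine(u, v):
    """Cosine similarity between sparse term-weight dictionaries."""
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(local_docs, local_centroid, remote_centroid):
    """Return local documents that fit the remote cluster better."""
    return [d for d in local_docs
            if cosine(d, remote_centroid) > cosine(d, local_centroid)]
```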
The second method for distributed document clustering is hierarchically-distributed document clustering. Unlike the collaborative model, this model aims to produce one clustering solution across the whole network. It specifically addresses scalability of network size, and consequently the complexity of distributed clustering, by modeling the distributed clustering problem as a hierarchy of node neighborhoods. Summarization of the global distributed clusters is achieved through a distributed version of the CorePhrase algorithm. Results on large document sets show that: (i) distributed clustering accuracy is not affected by increasing the number of nodes in single-level networks; (ii) decent speedup can be achieved by making the hierarchy taller, but at the expense of clustering quality, which degrades going up the hierarchy; (iii) in networks that grow arbitrarily, data becomes more fragmented across neighborhoods, causing poor centroid generation, which suggests that the number of nodes should not be increased beyond a certain level without increasing the dataset size; and (iv) distributed cluster summarization can produce accurate summaries similar to those produced by centralized summarization.
The proposed algorithms offer a high degree of flexibility, scalability, and interpretability for large distributed document collections. Achieving the same results using current methodologies requires centralizing the data first, which is sometimes not feasible.