21

Algoritmos de seleção de características personalizados por classe para categorização de texto / Class-customized feature selection algorithms for text categorization

FRAGOSO, Rogério César Peixoto 26 August 2016 (has links)
Text categorization is an important technique for organizing and retrieving information in digital documents. A common approach is to represent each word as a feature. However, most of the features in a textual document are irrelevant to its categorization. Dimensionality reduction is therefore a fundamental step to improve classification performance and reduce the high computational cost inherent to high-dimensional problems such as text categorization. The most commonly adopted strategy for dimensionality reduction in text categorization relies on filtering-based feature selection methods, which require an effort to configure the size of the final feature vector. This work proposes filtering methods that aim to improve categorization performance over state-of-the-art methods and to make it possible to determine the size of the final feature set automatically. The first proposed method, Category-dependent Maximum f Features per Document-Reduced (cMFDR), sets a threshold for each category that determines which documents are considered in the feature selection process. The method uses a parameter to define how many features are selected per document. This approach has some advantages, such as simplifying the choice of the most effective subset through a drastic reduction of the number of possible configurations. The second proposed method, Automatic Feature Subsets Analyzer (AFSA), introduces a procedure to determine, in a data-driven way, the most effective subset among a number of generated subsets. This method uses the same parameter as cMFDR to define the size of the final feature vector, which keeps the computational cost of searching for the most effective subset low. The performance of the proposed methods was assessed on the WebKB, Reuters, 20 Newsgroups, and TDT2 datasets, using the Bi-Normal Separation, Class Discriminating Measure, and Chi-Squared Statistics feature evaluation functions. The experimental results demonstrate that the proposed methods are more effective than state-of-the-art methods.
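A minimal sketch of the per-document filtering idea described above (not the thesis's exact cMFDR, and with assumed toy data): each document contributes its f highest-scoring terms, here ranked with scikit-learn's chi-squared statistic, and the union of those terms forms the reduced feature set. Varying the single parameter f is what keeps the number of candidate configurations small.

```python
# Illustrative sketch: select at most f features per document, ranking each
# document's terms by a global chi-squared relevance score.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import chi2

def max_f_features_per_document(texts, labels, f=3):
    vec = CountVectorizer()
    X = vec.fit_transform(texts)              # documents x terms (sparse counts)
    scores, _ = chi2(X, labels)               # global relevance score per term
    vocab = np.array(vec.get_feature_names_out())
    selected = set()
    for row in X:                             # each document votes for its top-f terms
        term_ids = row.nonzero()[1]
        if term_ids.size == 0:
            continue
        top = term_ids[np.argsort(scores[term_ids])[::-1][:f]]
        selected.update(vocab[top])
    return sorted(selected)

docs = ["cheap pills buy now", "meeting agenda for next monday",
        "buy cheap watches now", "agenda and minutes for the board meeting"]
labels = [1, 0, 1, 0]
print(max_f_features_per_document(docs, labels, f=2))
```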
22

Multi-Class Classification of Textual Data: Detection and Mitigation of Cheating in Massively Multiplayer Online Role Playing Games

Maguluri, Naga Sai Nikhil 10 May 2017 (has links)
No description available.
23

With or without context : Automatic text categorization using semantic kernels

Eklund, Johan January 2016 (has links)
In this thesis text categorization is investigated in four dimensions of analysis: theoretically as well as empirically, and as a manual as well as a machine-based process. In the first four chapters we look at the theoretical foundation of subject classification of text documents, with a certain focus on classification as a procedure for organizing documents in libraries. A working hypothesis used in the theoretical analysis is that classification of documents is a process that involves translations between statements in different languages, both natural and artificial. We further investigate the close relationships between structures in classification languages and the order relations and topological structures that arise from classification. A classification algorithm that receives special focus in the subsequent chapters is the support vector machine (SVM), which in its original formulation is a binary classifier in linear vector spaces, but has been extended to handle classification problems for which the categories are not linearly separable. To this end the algorithm utilizes a category of functions called kernels, which induce feature spaces by means of high-dimensional and often non-linear maps. For the empirical part of this study we investigate the classification performance of semantic kernels generated by different measures of semantic similarity. One category of such measures is based on the latent semantic analysis and random indexing methods, which generate term vectors using co-occurrence data from text collections. Another semantic measure used in this study is pointwise mutual information. In addition to the empirical study of semantic kernels we also investigate the performance of a term weighting scheme called divergence from randomness, which has hitherto received little attention within the area of automatic text categorization. The results of the empirical part of this study show that the semantic kernels generally outperform the “standard” (non-semantic) linear kernel, especially for small training sets. A conclusion that can be drawn with respect to the investigated datasets is therefore that semantic information in the kernel generally improves classification performance, and that the difference between the standard kernel and the semantic kernels is particularly large for small training sets. Another clear trend in the results is that the divergence from randomness weighting scheme yields a classification performance surpassing that of the common tf-idf weighting scheme.
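A minimal sketch, under assumed toy data, of how a semantic kernel can be supplied to an SVM: a term-by-term similarity matrix S (here built from simple document co-occurrence, standing in for LSA, random-indexing, or PMI similarities) turns the linear kernel XXᵀ into the semantic kernel XSXᵀ, which scikit-learn accepts as a precomputed kernel.

```python
# Hedged sketch: semantic kernel K = X S X^T for an SVM, where S is a toy
# term-by-term similarity matrix derived from co-occurrence counts.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC

train = ["the court ruled on the case", "the team won the match",
         "the judge heard the appeal", "the player scored a goal"]
y_train = [0, 1, 0, 1]
test = ["the referee stopped the match"]

vec = TfidfVectorizer()
Xtr = vec.fit_transform(train).toarray()
Xte = vec.transform(test).toarray()

# Toy term similarity from document co-occurrence, normalized to [0, 1].
cooc = Xtr.T @ Xtr
norms = np.sqrt(np.outer(cooc.diagonal(), cooc.diagonal())) + 1e-12
S = cooc / norms

K_train = Xtr @ S @ Xtr.T        # semantic kernel between training documents
K_test = Xte @ S @ Xtr.T         # kernel between test and training documents

clf = SVC(kernel="precomputed").fit(K_train, y_train)
print(clf.predict(K_test))
```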
24

Objektivita médií / Media Bias in Czech Newspapers

Žiačiková, Zuzana January 2011 (has links)
The purpose of this thesis is to examine the media bias of selected Czech newspapers. The study is conducted with a modified version of the method for measuring media bias introduced by Gentzkow and Shapiro (2010). Media bias here means the use of politically loaded language in the newspapers. The method is based on a comparison between the partisan language of the political parties represented in Parliament and the language of the selected newspapers. This is carried out by comparing phrase frequencies in parliamentary speeches attributed to the coalition or the opposition with phrase frequencies in the newspaper content. The analysis of the statistical dependence between newspaper language and partisan speech provides evidence that Hospodářské noviny and Lidové noviny use language more similar to the coalition, while the content of Mladá fronta Dnes, Blesk, Právo, and Haló appears to be closer to the opposition parties.
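A toy sketch of the underlying phrase-frequency comparison (assumed speeches and a simplified scoring rule, not the thesis's exact implementation of the Gentzkow–Shapiro measure): bigrams distinctive of coalition and opposition speeches are counted, and a newspaper text is scored by which side's phrases it uses more.

```python
# Illustrative sketch of phrase-frequency based slant measurement.
from collections import Counter

def bigrams(text):
    toks = text.lower().split()
    return list(zip(toks, toks[1:]))

coalition_speeches = ["fiscal responsibility and budget discipline now",
                      "budget discipline protects future growth"]
opposition_speeches = ["social justice for working families",
                       "working families deserve social justice"]
newspaper = "the editorial praised budget discipline and fiscal responsibility"

coal = Counter(b for s in coalition_speeches for b in bigrams(s))
oppo = Counter(b for s in opposition_speeches for b in bigrams(s))

def slant(article):
    # Positive score -> language closer to the coalition, negative -> opposition.
    return sum(coal[b] - oppo[b] for b in bigrams(article))

print(slant(newspaper))
```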
25

Statistical pattern recognition approaches for retrieval-based machine translation systems

Mansjur, Dwi Sianto 01 November 2011 (has links)
This dissertation addresses the problem of Machine Translation (MT), which is defined as an automated translation of a document written in one language (the source language) to another (the target language) by a computer. The MT task requires various types of knowledge of both the source and target language, e.g., linguistic rules and linguistic exceptions. Traditional MT systems rely on an extensive parsing strategy to decode the linguistic rules and use a knowledge base to encode those linguistic exceptions. However, the construction of the knowledge base becomes an issue as the translation system grows. To overcome this difficulty, real translation examples are used instead of a manually-crafted knowledge base. This design strategy is known as the Example-Based Machine Translation (EBMT) principle. Traditional EBMT systems utilize a database of word or phrase translation pairs. The main challenge of this approach is the difficulty of combining the word or phrase translation units into a meaningful and fluent target text. A novel Retrieval-Based Machine Translation (RBMT) system, which uses a sentence-level translation unit, is proposed in this study. An advantage of using the sentence-level translation unit is that the boundary of a sentence is explicitly defined and the semantics, or meaning, is precise in both the source and target languages. The main challenge of using a sentential translation unit is the limited coverage, i.e., the difficulty of finding an exact match between a user query and sentences in the source database. Using an electronic dictionary and a topic modeling procedure, we develop a method to obtain clusters of sensible variations for each example in the source database. The coverage of our MT system improves because an input query text is matched against a cluster of sensible variations of translation examples instead of being matched against an original source example. In addition, pattern recognition techniques are used to improve the matching procedure, i.e., the design of optimal pattern classifiers and the incorporation of subjective judgments. A high-performance statistical pattern classifier is used to identify the target sentences from an input query sentence in our MT system. The proposed classifier differs from conventional classifiers in the way it addresses generalization capability. A conventional classifier addresses the generalization issue using the parsimony principle and risks choosing an oversimplified statistical model. The proposed classifier directly addresses the generalization issue in terms of the training (empirical) data. Our classifier is expected to generalize better than conventional classifiers because it is less likely to use over-simplified statistical models given the available training data. We further improve the matching procedure by incorporating subjective judgments. We formulate a novel cost function that combines subjective judgments and the degree of matching between translation examples and an input query. In addition, we provide an optimization strategy for the novel cost function so that the statistical model can be optimized according to the subjective judgments.
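A minimal sketch of the retrieval step in a sentence-level RBMT system, with an assumed toy translation memory (not the dissertation's classifier or clustering procedure): the query is matched against stored source sentences by TF-IDF cosine similarity and the paired target sentence is returned.

```python
# Illustrative sketch: retrieve the closest source sentence and return its
# paired target sentence from a small translation memory.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

translation_memory = [
    ("where is the train station", "ou est la gare"),
    ("how much does this cost", "combien ca coute"),
    ("i would like a coffee", "je voudrais un cafe"),
]
sources = [s for s, _ in translation_memory]

vec = TfidfVectorizer()
S = vec.fit_transform(sources)

def translate(query):
    q = vec.transform([query])
    best = cosine_similarity(q, S).argmax()   # most similar source sentence
    return translation_memory[best][1]

print(translate("where is the station"))
```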
26

Medida de certeza na categorização multi-rótulo de texto e sua utilização como estratégia de poda do ranking de categorias / A certainty measure for multi-label text categorization and its use as a strategy for pruning the category ranking

Souza, Caribe Zampirolli de 27 August 2010 (has links)
Given an input document, a multi-label text categorization system typically computes degrees of belief for the categories of a pre-defined set, orders the categories by degree of belief, and assigns to the document the categories whose degrees of belief exceed a given pruning threshold. Ideally, the degree of belief should reflect the probability that the document actually belongs to the category. Unfortunately, categorizers that compute such probabilities do not yet exist, and mapping degrees of belief to probabilities is still a little-explored problem in IR. In this work we propose a method based on Bayes' rule to map degrees of belief to certainty measures for multi-label text categorization. We also propose a pruning-threshold strategy based on this certainty measure, the Bayesian cut (BCut), and a variant of it, the position-based Bayesian cut (PBCut). We experimentally evaluated the impact of the proposed methods on the performance of two multi-label text categorization techniques, multi-label k-nearest neighbors (ML-kNN) and data-correlated VG-RAM weightless neural networks (VG-RAM WNN-COR), in the context of categorizing descriptions of the economic activities of Brazilian companies according to the Brazilian National Classification of Economic Activities (CNAE). We also investigated the impact on multi-label categorization performance of three pruning methods commonly used in the IR literature, RCut, PCut, and SCut, and of an RCut variant, RTCut. Moreover, we propose new variants of PCut and SCut, named PCut* and SCut* respectively, to address problems in these approaches. Our experimental results show that, using our method for generating categorization certainty measures, it is possible to predict how certain the categorizer is that its predicted categories are in fact pertinent to a given document. The results also show that our pruning strategies BCut and PBCut yield categorization performance superior, in terms of precision, to all other strategies considered.
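A hedged sketch of the general idea of mapping degrees of belief to certainty measures and pruning the category ranking with them (assumed binning scheme and toy data, not the dissertation's BCut or PBCut definitions): certainty is estimated from validation data as the empirical probability of relevance within each score bin, and categories whose certainty falls below a threshold are pruned from the ranking.

```python
# Illustrative sketch: calibrate scores into certainty estimates, then prune.
import numpy as np

def certainty_table(scores, relevant, bins=5):
    """Estimate P(category relevant | score bin) from validation data."""
    edges = np.linspace(0.0, 1.0, bins + 1)
    idx = np.clip(np.digitize(scores, edges) - 1, 0, bins - 1)
    table = np.zeros(bins)
    for b in range(bins):
        mask = idx == b
        if mask.any():
            # empirical posterior for this bin (Bayes' rule with count estimates)
            table[b] = relevant[mask].mean()
    return edges, table

def prune_ranking(category_scores, edges, table, min_certainty=0.5):
    """Order categories by score and keep those whose certainty clears the cut."""
    idx = np.clip(np.digitize(category_scores, edges) - 1, 0, len(table) - 1)
    certainty = table[idx]
    order = np.argsort(category_scores)[::-1]
    return [int(c) for c in order if certainty[c] >= min_certainty]

# Toy validation scores/relevance and one test document's per-category scores.
val_scores = np.array([0.9, 0.8, 0.2, 0.6, 0.1, 0.7, 0.3, 0.95])
val_relevant = np.array([1, 1, 0, 1, 0, 0, 0, 1])
edges, table = certainty_table(val_scores, val_relevant)
print(prune_ranking(np.array([0.92, 0.4, 0.75, 0.15]), edges, table))
```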
27

A Framework of Automatic Subject Term Assignment: An Indexing Conception-Based Approach

Chung, EunKyung 12 1900 (has links)
The purpose of this dissertation is to examine whether an understanding of the subject indexing processes conducted by human indexers has a positive impact on the effectiveness of automatic subject term assignment through text categorization (TC). More specifically, human indexers' subject indexing approaches, or conceptions, in conjunction with semantic sources were explored in the context of a typical scientific journal article data set. Based on the premise that subject indexing approaches or conceptions together with semantic sources are important for automatic subject term assignment through TC, this study proposed an indexing conception-based framework. For the purpose of this study, three hypotheses were tested: 1) the effectiveness of semantic sources, 2) the effectiveness of an indexing conception-based framework, and 3) the effectiveness of each of three indexing conception-based approaches (the content-oriented, the document-oriented, and the domain-oriented approaches). The experiments were conducted using a support vector machine implementation in WEKA (Witten & Frank, 2000). The experimental results showed that cited works, source title, and title were as effective as the full text, while keywords were found to be more effective than the full text. In addition, the findings showed that an indexing conception-based framework was more effective than the full text. In particular, the content-oriented and the document-oriented indexing approaches were found to be more effective than the full text. Among the three indexing conception-based approaches, the content-oriented approach and the document-oriented approach were more effective than the domain-oriented approach. In other words, in the context of a typical scientific journal article data set, the objective contents and authors' intentions were given more focus than the possible users' needs. The research findings support the claim that incorporating human indexers' indexing approaches or conceptions in conjunction with semantic sources has a positive impact on the effectiveness of automatic subject term assignment.
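A small illustration (toy records and scikit-learn rather than the dissertation's WEKA setup) of the comparison of semantic sources as input to an SVM text categorizer: the same pipeline is trained separately on titles, keywords, and full text, and cross-validated accuracy is compared across sources.

```python
# Illustrative sketch: compare semantic sources (title, keywords, full text)
# as input to an SVM text categorizer.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

records = [
    {"title": "protein folding dynamics", "keywords": "biochemistry protein simulation",
     "full_text": "we simulate folding pathways of small proteins", "label": "biology"},
    {"title": "graph partitioning heuristics", "keywords": "graphs optimization heuristics",
     "full_text": "we study balanced partitions of sparse graphs", "label": "cs"},
    {"title": "enzyme kinetics models", "keywords": "biochemistry enzymes kinetics",
     "full_text": "rate equations for enzyme catalysed reactions", "label": "biology"},
    {"title": "approximation algorithms for scheduling", "keywords": "algorithms scheduling np-hard",
     "full_text": "we give constant factor approximations for makespan", "label": "cs"},
]
labels = [r["label"] for r in records]

for source in ("title", "keywords", "full_text"):
    texts = [r[source] for r in records]
    model = make_pipeline(TfidfVectorizer(), LinearSVC())
    scores = cross_val_score(model, texts, labels, cv=2)   # toy-sized cross-validation
    print(source, scores.mean())
```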
28

[en] A STUDY OF MULTILABEL TEXT CLASSIFICATION ALGORITHMS USING NAIVE-BAYES / [pt] UM ESTUDO DE ALGORITMOS PARA CLASSIFICAÇÃO AUTOMÁTICA DE TEXTOS UTILIZANDO NAIVE-BAYES

DAVID STEINBRUCH 12 March 2007 (has links)
The amount of electronic information has been growing rapidly, mainly due to the ease of publication and dissemination that the Internet provides, so organizing this information is necessary to facilitate its retrieval. Many works have addressed this problem through automatic text classification, associating several labels with each document (multilabel classification). However, those works transform the problem into binary classification subproblems, assuming that the categories are independent. Moreover, they use thresholds that are very specific to the training set employed and therefore have little generalization capacity in the learning process. This thesis proposes two automatic text classifiers based on the multinomial naive Bayes algorithm and their use in an online text classification environment with user relevance feedback. To test the efficiency of the proposed algorithms, experiments were performed on the Reuters 21578 news collection and on the Ohsumed medical document collection.
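A minimal sketch (assumed toy data, not the thesis's two classifiers) of multilabel text classification with multinomial naive Bayes: a one-vs-rest wrapper lets each document receive several labels at once.

```python
# Illustrative sketch: multilabel text classification with multinomial
# naive Bayes using a one-vs-rest wrapper.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.pipeline import make_pipeline

docs = ["oil prices rise after opec meeting",
        "central bank raises interest rates",
        "opec output cut lifts oil and energy stocks",
        "new treatment shows promise in cancer trial"]
labels = [{"energy"}, {"finance"}, {"energy", "finance"}, {"medicine"}]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)                     # binary indicator matrix

model = make_pipeline(CountVectorizer(),
                      OneVsRestClassifier(MultinomialNB()))
model.fit(docs, Y)

pred = model.predict(["oil stocks fall as interest rates climb"])
print(mlb.inverse_transform(pred))                # predicted label set(s)
```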
29

Modelos de tópicos na classificação automática de resenhas de usuários. / Topic models in user review automatic classification.

Mauá, Denis Deratani 14 August 2009 (has links)
There is a large number of user reviews on the internet containing valuable information on services, products, politics, and trends. The automatic understanding of these opinions is both scientifically interesting and potentially profitable. Sentiment classification is concerned with the automatic extraction of the opinions expressed in text documents. Unlike the more traditional task of text categorization, in which documents are classified into subjects such as sports, economics, and tourism, sentiment classification consists of tagging documents with the sentiments they express. Compared with traditional classifiers, sentiment classifiers have shown unsatisfactory performance. One possible cause of this poor performance is the lack of adequate representations that allow the expressed opinions to be discriminated in a concise, machine-readable form. Topic models are statistical models that seek to extract semantic information hidden in the large amount of data present in text collections. They represent a document as a mixture of topics, where each topic is a probability distribution over words representing a semantic concept implicit in the data. In a topic model representation, words can be replaced by topics that concisely capture their meaning. Indeed, topic models perform a dimensionality reduction of the data that can improve the performance of text categorization and information retrieval techniques. In sentiment classification, they can provide the necessary representation by extracting topics that represent the sentiments expressed in the text. This work studies the application of topic models to the representation and sentiment classification of user reviews. In particular, the Latent Dirichlet Allocation (LDA) model and four extensions (two of them developed by the author) are evaluated on the task of aspect-based sentiment classification. The extensions to the LDA model allow an investigation of the effects of incorporating additional information, such as context, aspect ratings, and multiple aspect ratings, into the original model.
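A minimal sketch (assumed toy reviews, plain LDA rather than the thesis's extensions) of using topic proportions as a reduced document representation for sentiment classification: the topic mixture inferred for each review becomes the feature vector of a downstream classifier.

```python
# Illustrative sketch: LDA topic proportions as features for a sentiment classifier.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reviews = ["great food and friendly staff",
           "terrible service and cold food",
           "lovely atmosphere, will come back",
           "rude waiter, awful experience"]
sentiment = [1, 0, 1, 0]   # 1 = positive, 0 = negative

model = make_pipeline(
    CountVectorizer(),
    LatentDirichletAllocation(n_components=2, random_state=0),  # doc-topic mixtures
    LogisticRegression(),
)
model.fit(reviews, sentiment)
print(model.predict(["friendly staff and great atmosphere"]))
```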
30

Προσωποποιημένη προβολή περιεχομένου του Διαδικτύου με τεχνικές προ-επεξεργασίας, αυτόματης κατηγοριοποίησης και αυτόματης εξαγωγής περίληψης / Personalized presentation of Web content using pre-processing, automatic categorization, and automatic summarization techniques

Πουλόπουλος, Βασίλειος 22 November 2007 (has links)
The scope of this MSc thesis is the extension and upgrade of the mechanism constructed during my undergraduate thesis, entitled “Construction of a Web Portal with Personalized Access to WWW content”. That thesis included the construction of a mechanism that began with information retrieval from the WWW and concluded with the presentation of information through a portal, after applying useful text extraction, text pre-processing, and text categorization techniques. The scope of the MSc thesis is to locate the problematic parts of the system, correct them with better algorithms, and add more modules to the complete mechanism. More precisely, every stage of the mechanism is upgraded and new modules are added. The information retrieval module is based on a simple crawler. The procedure relies on the fact that all the major news portals provide RSS feeds: by locating the latest articles added to the RSS feeds we obtain the URLs of the HTML pages that contain articles. The crawler then visits each URL and downloads the HTML page. These pages are filtered by the useful text extraction mechanism in order to extract only the body of the article from the HTML page. This procedure is based on the web-clipping technique: an HTML parser analyzes the DOM model of the page and locates the nodes (leaves) that contain large amounts of text and are close to other nodes with large amounts of text. These nodes are considered to contain the useful text. To the extracted text we apply a five-level preprocessing technique in order to extract the keywords of the article: we remove punctuation, numbers, words shorter than 4 letters, and stopwords, and finally apply a stemming algorithm to reduce each word to its root. The keywords are used by two interconnected subsystems: a summarization subsystem, which constructs a summary of the article, and a categorization subsystem, which labels the article. The labeling is not unique; the categorization applies multi-labeling techniques, using cosine similarity against prototype categories built from selected articles, in order to detect the relation of the article to each of the standard categories of the system. The summarization technique is based on heuristics: using language processing and facts about the keywords, we compute a score for each sentence of the article. The higher the score of a sentence, the more likely it is to be included in the summary, which consists of sentences from the text. For each article a general summary is produced, but the system is also able to create personalized summaries for each user. The combination of categorization and summarization provides the information that is shown on our web portal, called perssonal. Personalization in the portal is based on the selections of the user, on the non-selections of the user, on the time the user remains on an article, and on the time spent reading similar or identical articles. After a short period of time, the system is able to adapt to the user's needs and to present only articles that match the user's preferences.
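An illustrative sketch of two pieces of the pipeline described above, with an assumed stopword list and a crude truncation standing in for real stemming (not the thesis code): the five preprocessing steps that produce keywords, and cosine similarity against prototype category vectors for multi-labeling.

```python
# Illustrative sketch: keyword preprocessing plus cosine similarity
# against prototype category vectors.
import re
import math
from collections import Counter

STOPWORDS = {"this", "that", "with", "from", "have", "about", "their"}

def keywords(text):
    text = re.sub(r"[^\w\s]", " ", text)          # 1. remove punctuation
    text = re.sub(r"\d+", " ", text)              # 2. remove numbers
    words = [w for w in text.lower().split()
             if len(w) >= 4                       # 3. drop words shorter than 4 letters
             and w not in STOPWORDS]              # 4. drop stopwords
    return [w[:6] for w in words]                 # 5. crude stand-in for stemming

def cosine(a, b):
    common = set(a) & set(b)
    num = sum(a[t] * b[t] for t in common)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

prototypes = {
    "sports": Counter(keywords("team match player season goal score league")),
    "economy": Counter(keywords("market prices inflation bank growth trade")),
}

article = "The central bank warned that rising prices could slow market growth."
vec = Counter(keywords(article))
print({cat: round(cosine(vec, proto), 3) for cat, proto in prototypes.items()})
```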
