About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. It is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1

WebDoc: An Automated Web Document Indexing System

Tang, Bo, 13 December 2002
This thesis describes WebDoc, an automated system that classifies Web documents according to the Library of Congress classification system. The work extends an earlier version of the system that successfully generated indexes for journal articles. The unique features of Web documents, and how they affect the design of a classification system, are discussed. We argue that full-text analysis of Web documents is unavoidable and that contextual information must be used to assist classification. The architecture of the WebDoc system is presented, and we report experiments run with and without the assistance of contextual information. The results show that contextual information improved the system's performance significantly.
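The role of contextual information described above can be illustrated with a toy scorer that combines full-text evidence with context evidence (such as the anchor text of links pointing at the page). Everything here, the LCC keyword lists, the context weight, and the scoring rule, is an illustrative assumption, not WebDoc's actual method:

```python
from collections import Counter

# Hypothetical keyword lists for a few Library of Congress classes.
CLASS_KEYWORDS = {
    "Q":  {"science", "experiment", "theory"},    # Q: Science (general)
    "QA": {"computer", "algorithm", "software"},  # QA: Mathematics / CS
    "T":  {"engineering", "design", "industrial"} # T: Technology
}

def score_classes(full_text, context_text, context_weight=2.0):
    """Score each class by keyword overlap; terms coming from the
    document's context count extra, reflecting the idea that context
    helps disambiguate the full text."""
    doc_terms = Counter(full_text.lower().split())
    ctx_terms = Counter(context_text.lower().split())
    scores = {}
    for cls, keywords in CLASS_KEYWORDS.items():
        s = sum(doc_terms[w] for w in keywords)
        s += context_weight * sum(ctx_terms[w] for w in keywords)
        scores[cls] = s
    return max(scores, key=scores.get), scores

best, _ = score_classes(
    "a software tool implementing a new algorithm",
    "computer science algorithm page",
)
```

With the context weighted in, the ambiguous page is pulled toward the class its surrounding links suggest; dropping the context term counts would leave only the sparser full-text evidence.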
2

Indexation et interrogation de pages web décomposées en blocs visuels / Indexing and Querying Web Pages Decomposed into Visual Blocks

Faessel, Nicolas, 14 June 2011
This thesis addresses the indexing and querying of Web pages. We propose a new model, BlockWeb, based on decomposing a Web page into a hierarchy of visual blocks. The model takes into account the visual importance of each block and the permeability of each block to the content of its neighboring blocks on the page. Splitting a page into blocks has several advantages for indexing and querying: in particular, it allows querying at a finer granularity than the whole page, so the blocks most similar to a query can be returned instead of the full page.
A page is modeled as a directed acyclic graph, the IP graph, in which each node is associated with a block and labeled with that block's importance coefficient, and each arc is labeled with the permeability coefficient of the target block to the source block's content. To build this graph from the block-tree representation of a page, we propose a new language, XIML (XML Indexing Management Language), a rule-based language in the style of XSLT. The model was evaluated on two distinct tasks: finding the best entry point in a corpus of electronic newspaper articles, and indexing and querying images in a corpus drawn from Web pages of the ImagEval 2006 campaign. We present the results of both experiments.
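The IP-graph indexing idea can be sketched in a few lines: each block's index terms are weighted by its importance coefficient, and terms flow along arcs scaled by the arc's permeability. The data structures and the exact combination rule below are assumptions for illustration, not the thesis's formulas:

```python
from collections import Counter

def indexed_terms(blocks, arcs):
    """blocks: {block_id: (importance, Counter of term weights)},
    listed in topological order of the DAG.
    arcs: list of (source_id, target_id, permeability).
    Returns the effective term weights of each block: its own terms
    scaled by importance, plus terms received from source blocks
    scaled by the arc's permeability."""
    result = {}
    for bid, (importance, terms) in blocks.items():
        acc = Counter({t: importance * w for t, w in terms.items()})
        for src, tgt, perm in arcs:
            if tgt == bid and src in result:
                for t, w in result[src].items():
                    acc[t] += perm * w
        result[bid] = acc
    return result

# A two-block page: a title block whose terms partly permeate the body.
blocks = {
    "title": (1.0, Counter({"blockweb": 2})),
    "body":  (0.5, Counter({"indexing": 4})),
}
arcs = [("title", "body", 0.3)]
idx = indexed_terms(blocks, arcs)
```

The payoff is exactly the finer granularity mentioned above: a query for "blockweb" can now match the body block with a nonzero score, even though the term appears only in the title.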
3

Criação de vetores temáticos de domínios para a desambiguação polissêmica de termos. / Creation of thematic vectors of domains for the polysemic disambiguation of terms.

BISPO, Magna Celi Tavares, 01 August 2018
Term ambiguity is one of the factors that hinders document indexing and the retrieval of the information a user wants. This work is based on the hypothesis that part of the problem can be mitigated by knowing in advance the domain of the document that contains the ambiguous terms. To determine this domain, thematic vocabularies were built by extracting terms, using syntactic rules, from documents belonging to predetermined knowledge domains. Wikipedia was used as the reference source because it is a digital encyclopedia whose categories resemble the Universal Decimal Classification (UDC), each category containing a vast number of specific documents, a feature essential for building a domain-specific vocabulary. The choice of categories was based on the UDC, which comprises 10 domains and their respective subdomains. The resulting vocabularies, called Thematic Domain Vectors (TDV), served as the basis for classifying new documents.
To validate the TDVs, three experiments were performed. The first classified new documents with the vector model, using the TDVs as the reference base. The second used another classifier, Intellexer Categorizer. In the third, a term vector was created with Weka and used as the reference base for classifying new documents under the vector model. The results were satisfactory: the TDV approach obtained the best classification of the three, correctly classifying 10 of the 14 new documents and missing 4, reported as 80% accuracy, against 57% for Intellexer Categorizer and 50% for the classification using the Weka-created term vector.
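The vector-model classification against domain vectors described above can be sketched as cosine similarity between a document's term vector and each TDV. The domain vocabularies below are invented for illustration and are not drawn from the thesis:

```python
import math
from collections import Counter

# Toy stand-ins for Thematic Domain Vectors: term -> weight per domain.
DOMAIN_VECTORS = {
    "science": Counter({"experiment": 3, "theory": 2, "data": 1}),
    "arts":    Counter({"painting": 3, "music": 2, "theatre": 1}),
}

def cosine(a, b):
    """Cosine similarity between two sparse term-weight vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify(text):
    """Assign the document to the domain whose vector it is closest to."""
    doc = Counter(text.lower().split())
    return max(DOMAIN_VECTORS, key=lambda d: cosine(doc, DOMAIN_VECTORS[d]))

label = classify("the experiment produced new data supporting the theory")
```

Knowing the winning domain is what then narrows down the intended sense of an ambiguous term, which is the disambiguation step the thesis targets.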
