  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
231

VersionsRank: reputation scores for web pages based on version detection

Silva, Glauber Rodrigues da January 2009 (has links)
Search engines use the WebGraph formed by pages and their links to assign reputation to Web pages; this reputation is then used to rank the results returned to the user. However, new versions of a page with a good reputation end up splitting the reputation votes among all the versions, hurting both the original page and its versions. The objective of this work is to specify new scores that take all versions of a Web page into account when assigning reputation to them. To achieve this goal, four scores were proposed that use version detection to assign a more homogeneous reputation to pages that are versions of the same document. The four proposed scores fall into two categories: those that make structural changes to the WebGraph (VersionRank and VersionPageRank) and those that perform arithmetic operations on the scores produced by the PageRank algorithm (VersionSumRank and VersionAverageRank). The experiments show that VersionRank outperforms PageRank by 26.55% in terms of MRR for navigational queries on WBR03, and in terms of P@10 it gains 9.84% for informational queries on WBR99. The VersionAverageRank score gave the best P@10 results for informational queries on WBR99 and WBR03: on WBR99 its gain over PageRank was 6.74%, and on WBR03, for random informational queries, it gained 35.29% over PageRank.
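As a rough illustration of the second category of scores (those operating arithmetically on PageRank values), the following sketch shows what a VersionAverageRank-style computation might look like; the function name, data layout, and grouping input are assumptions based on the abstract, not the thesis's actual implementation:

```python
def version_average_rank(pagerank, version_groups):
    """Give every page in a version group the mean PageRank of the group.

    pagerank: dict mapping page id -> PageRank score
    version_groups: iterable of sets, each holding the page ids detected
    as versions of one and the same document
    """
    scores = dict(pagerank)  # pages with no detected versions keep their score
    for group in version_groups:
        avg = sum(pagerank[p] for p in group) / len(group)
        for p in group:
            scores[p] = avg
    return scores

# Toy example: pages b and c are detected as versions of one document.
pr = {"a": 0.5, "b": 0.3, "c": 0.1}
print(version_average_rank(pr, [{"b", "c"}]))  # b and c both receive 0.2
```

VersionSumRank would presumably replace the mean by a sum over the group, while the structural variants (VersionRank, VersionPageRank) rewire the WebGraph before running PageRank, which this sketch does not attempt.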
232

Chinese information access through the Internet on the X/Open system.

January 1997 (has links)
by Yao Jian.
Thesis (M.Phil.)--Chinese University of Hong Kong, 1997.
Includes bibliographical references (leaves 109-112).
Contents:
  Abstract --- p.i
  Acknowledgments --- p.iii
  Chapter 1 --- Introduction --- p.1
  Chapter 2 --- Basic Concepts and Related Work --- p.6
    2.1 --- Codeset and Codeset Conversion --- p.7
    2.2 --- HTML Language --- p.10
    2.3 --- HTTP Protocol --- p.13
    2.4 --- I18N and L10N --- p.18
    2.5 --- Proxy Server --- p.19
    2.6 --- Related Work --- p.20
  Chapter 3 --- Design Principles and System Architecture --- p.23
    3.1 --- Use of Existing Web System --- p.23
      3.1.1 --- Protocol --- p.23
      3.1.2 --- Avoid Duplication of Documents for Different Codesets --- p.25
      3.1.3 --- Support On-line Codeset Conversion Facility --- p.27
      3.1.4 --- Provide Internationalized Interface of Web Browser --- p.28
    3.2 --- Our Approach --- p.29
      3.2.1 --- Enhancing the Existing Browsers and Servers --- p.30
      3.2.2 --- Incorporating Proxies in Our Scheme --- p.32
      3.2.3 --- Automatic Codeset Conversion --- p.34
    3.3 --- Overall System Architecture --- p.38
      3.3.1 --- Architecture of Our Web System --- p.38
      3.3.2 --- Flexibility of Our Design --- p.40
      3.3.3 --- Which Side Does the Codeset Conversion? --- p.42
      3.3.4 --- Caching --- p.42
  Chapter 4 --- Design Details of an Enhanced Server --- p.44
    4.1 --- Architecture of the Enhanced Server --- p.44
    4.2 --- Procedure on Processing Client's Request --- p.45
    4.3 --- Modifications of the Enhanced Server --- p.48
      4.3.1 --- Interpretation of Client's Codeset Announcement --- p.48
      4.3.2 --- Codeset Identification of Web Documents on the Server --- p.49
      4.3.3 --- Codeset Notification to the Web Client --- p.52
      4.3.4 --- Codeset Conversion --- p.54
    4.4 --- Experiment Results --- p.54
  Chapter 5 --- Design Details of an Enhanced Browser --- p.58
    5.1 --- Architecture of the Enhanced Browser --- p.58
    5.2 --- Procedure on Processing Users' Requests --- p.61
    5.3 --- Event Management and Handling --- p.63
      5.3.1 --- Basic Control Flow of the Browser --- p.63
      5.3.2 --- Event Handlers --- p.64
    5.4 --- Internationalization of Browser Interface --- p.75
      5.4.1 --- Locale --- p.76
      5.4.2 --- Resource File --- p.77
      5.4.3 --- Message Catalog System --- p.79
    5.5 --- Experiment Result --- p.85
  Chapter 6 --- Another Scheme: CGI --- p.89
    6.1 --- Form and CGI --- p.90
    6.2 --- CGI Control Flow --- p.96
    6.3 --- Automatic Codeset Detection --- p.96
      6.3.1 --- Analysis of Code Range for GB and Big5 --- p.98
      6.3.2 --- Control Flow of Automatic Codeset Detection --- p.99
    6.4 --- Experiment Results --- p.101
  Chapter 7 --- Conclusions and Future Work --- p.104
    7.1 --- Current Status --- p.105
    7.2 --- System Efficiency --- p.106
    7.3 --- Future Work --- p.107
  Bibliography --- p.109
  Appendix A --- Programmer's Guide --- p.113
    A.1 --- Data Structure --- p.113
    A.2 --- Calling Sequence of Functions --- p.114
    A.3 --- Modification of Source Code --- p.116
    A.4 --- Modification of Resources --- p.133
  Appendix B --- User Manual --- p.135
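Section 6.3.1 of this thesis analyzes the code ranges of GB and Big5 for automatic codeset detection. The thesis's own algorithm is not reproduced here, but a minimal heuristic in the same spirit can be sketched from standard facts about the two encodings (Big5 trail bytes may fall in 0x40-0x7E, a range that never occurs as the second byte of a GB double-byte pair); the voting scheme below is purely illustrative:

```python
def guess_codeset(data: bytes) -> str:
    """Crude GB-vs-Big5 guess: count double-byte pairs whose trail byte
    is legal in only one of the two encodings.
    Returns "GB", "Big5", or "ASCII" (no double-byte pairs seen)."""
    gb_hits = big5_hits = 0
    i = 0
    while i < len(data) - 1:
        lead, trail = data[i], data[i + 1]
        if lead < 0x80:              # single-byte ASCII, consume one byte
            i += 1
            continue
        if 0x40 <= trail <= 0x7E:    # legal only as a Big5 trail byte
            big5_hits += 1
        elif 0xA1 <= trail <= 0xFE:  # legal in both; GB lead bytes stop at 0xF7
            if lead <= 0xF7:
                gb_hits += 1
            else:
                big5_hits += 1
        i += 2                       # consume the double-byte pair
    if gb_hits == big5_hits == 0:
        return "ASCII"
    return "GB" if gb_hits >= big5_hits else "Big5"

print(guess_codeset(b"\xd6\xd0\xce\xc4"))  # GB-2312 bytes for two Hanzi -> GB
```

A production detector would also handle byte pairs illegal in both encodings and weigh character-frequency statistics; the actual control flow is presumably what Section 6.3.2 describes.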
233

Modeling information-seeking expertise on the Web

Tabatabai, Diana January 2002 (has links)
No description available.
234

Web application development methodology and its supporting tools

Fan, Xin January 2003 (has links)
This thesis is devoted to a component model for Web applications and the corresponding tools for their development. The model is described after a review of various existing Web application development methods and models. A case study is also provided to support the analysis. / Thesis (PhD)--University of South Australia, 2003.
235

A Novel Concept and Context-Based Approach for Web Information Retrieval

Zakos, John, n/a January 2005 (has links)
Web information retrieval is a relatively new research area that has attracted significant interest from researchers around the world since the emergence of the World Wide Web in the early 1990s. The problems facing successful web information retrieval combine challenges inherited from traditional information retrieval with challenges peculiar to the nature of the World Wide Web. The goal of any information retrieval system is to fulfil a user's information need; in a web setting, this means retrieving as many relevant web documents as possible in response to a query that typically contains only a few terms expressing that need. This thesis is primarily concerned with, first, reviewing pertinent literature on various aspects of web information retrieval research and, second, proposing and investigating a novel concept- and context-based approach. The approach consists of techniques that can be used together or independently and that aim to improve retrieval accuracy over other approaches. A novel concept-based term weighting technique is proposed as a new method of deriving query term significance from ontologies for the weighting of query terms. A technique that dynamically determines the significance of terms occurring in documents, based on the matching of contexts, is also proposed. Other contributions of this research include techniques for combining document and query term weights when ranking retrieved documents. All techniques were implemented and tested on benchmark data, providing a basis for comparison with previous top-performing web information retrieval systems. High retrieval accuracy is reported as a result of the proposed approach, supported by comprehensive experimental evidence and favourable comparisons against previously published results.
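The abstract mentions combining ontology-derived query term weights with context-based document term weights for ranking, but does not give the combination scheme. A generic weighted dot-product ranker, with every name and weight below hypothetical, might look like:

```python
def rank_documents(query_weights, doc_term_weights):
    """Score each document as the weighted dot product of query-term
    weights (e.g. concept-based, derived from an ontology) and document
    term weights (e.g. context-based), then sort by descending score.

    query_weights: dict term -> weight for the query
    doc_term_weights: dict doc id -> dict term -> weight
    """
    scores = {}
    for doc, term_weights in doc_term_weights.items():
        scores[doc] = sum(qw * term_weights.get(term, 0.0)
                          for term, qw in query_weights.items())
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy example with made-up weights:
q = {"web": 0.9, "retrieval": 0.6}
docs = {"d1": {"web": 0.5, "retrieval": 0.4},
        "d2": {"web": 0.2}}
print(rank_documents(q, docs))  # d1 ranks above d2
```

The thesis's actual techniques (concept-based query weighting, context matching) would supply the two weight dictionaries; this sketch only shows the final combination step.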
236

Cost-effective techniques for user-session-based testing of Web applications

Sampath, Sreedevi. January 2006 (has links)
Thesis (Ph.D.)--University of Delaware, 2006. / Principal faculty advisor: Lori L. Pollock, Dept. of Computer & Info Sciences. Includes bibliographical references.
237

Development of the Web interface for the T-REX software (tree and reticulogram reconstruction)

Younes, Adel 12 1900 (has links) (PDF)
Since its release, the T-Rex (tree and reticulogram reconstruction) software has been a powerful tool for reconstructing and visualizing phylogenetic trees and reticulograms. A reticulogram is a phylogenetic network that represents reticulate evolutionary phenomena such as hybridization, genetic recombination, and lateral gene transfer. Reconstruction can be carried out from complete distance matrices, incomplete distance matrices, or molecular sequences. To let biologists and bioinformaticians take advantage of the various options, computation modes, and recent additions to T-Rex, we developed the Web version of the software. In this thesis, we describe the functions, methods, and tools included in T-Rex Web. We also describe the programs newly added to T-Rex Web, such as ClustalW, computation of the Robinson and Foulds topological distance, and Species Taxonomy. The latter option generates a tree distance matrix and reconstructs phylogenetic trees from lists of lineages of given species. The Web version of the T-Rex software is available at the following URL: www.trex.uqam.ca. ______________________________________________________________________________ AUTHOR KEYWORDS: T-Rex, phylogenetic tree, distance matrix, reticulogram, Web interface, species taxonomy.
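One of the programs added to T-Rex Web computes the Robinson and Foulds topological distance. By its standard definition this is the number of non-trivial bipartitions present in one tree but not the other; a minimal sketch over precomputed bipartition sets (Newick parsing and canonicalisation are left to the caller, and nothing here reflects T-Rex's actual code) could be:

```python
def robinson_foulds(splits_a, splits_b):
    """Robinson-Foulds distance between two trees on the same leaf set.

    Each tree is given as a set of non-trivial bipartitions; each
    bipartition is a frozenset of the leaf labels on one side,
    canonicalised by the caller (e.g. always the side NOT containing
    some fixed reference leaf, so complementary sides compare equal).
    """
    return len(splits_a ^ splits_b)  # size of the symmetric difference

# Splits of ((a,b),(c,(d,e))) and ((a,c),(b,(d,e))), canonicalised as
# the side not containing leaf "a":
s1 = {frozenset("cde"), frozenset("de")}
s2 = {frozenset("bde"), frozenset("de")}
print(robinson_foulds(s1, s2))  # -> 2
```

The distance is 2 here because the trees share the {d,e} split but disagree on the other internal edge.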
238

Internet user access via dial-up and campus wireless networks: traffic characterization and statistics

Hutchins, Ronald Roscoe January 2001 (has links)
No description available.
239

An "Interest" Index for WWW Servers and CyberRanking

YAMAMOTO, Shuichiro, MOTODA, Toshihiro, HATASHIMA, Takashi 20 April 2000 (has links)
No description available.
240

Content analysis of web sites from 2000 to 2004: a thematic meta-analysis

Zhang, Jian 01 November 2005 (has links)
The rise of the World Wide Web attracted attention from social science scholars, especially those in communication schools, who studied it with methods such as content analysis. However, the dynamic environment of the World Wide Web challenged this traditional research method, and scholars in turn tried to work out valid solutions, which are summarized in the literature review section. After 2000, few studies focused on content analysis of Web sites, even as the World Wide Web developed rapidly and affected people's everyday lives. This study conducted a thematic meta-analysis to examine how researchers have applied content analysis to the World Wide Web since 2000. A total of 39 studies that used content analysis to study Web sites were identified from three sources; data were then collected and analyzed. The study found that, from 2000 to 2004, content analysis of the World Wide Web proliferated, and content-analysis scholars created new strategies to cope with the challenges posed by the WWW. The suggestions made in this study form guidelines for the steps of content-analysis research design, potentially helping future content analysis of Web sites develop valid methods for studying the fast-paced WWW.
