31

Effective web crawlers

Ali, Halil, hali@cs.rmit.edu.au January 2008 (has links)
Web crawlers are the component of a search engine that must traverse the Web, gathering documents in a local repository for indexing by a search engine so that they can be ranked by their relevance to user queries. Whenever data is replicated in an autonomously updated environment, there are issues with maintaining up-to-date copies of documents. When documents are retrieved by a crawler and have subsequently been altered on the Web, the effect is an inconsistency in user search results. While the impact depends on the type and volume of change, many existing algorithms do not take the degree of change into consideration, instead using simple measures that consider any change as significant. Furthermore, many crawler evaluation metrics do not consider index freshness or the amount of impact that crawling algorithms have on user results. Most of the existing work makes assumptions about the change rate of documents on the Web, or relies on the availability of a long history of change. Our work investigates approaches to improving index consistency: detecting meaningful change, measuring the impact of a crawl on collection freshness from a user perspective, developing a framework for evaluating crawler performance, determining the effectiveness of stateless crawl ordering schemes, and proposing and evaluating the effectiveness of a dynamic crawl approach. Our work is concerned specifically with cases where there is little or no past change statistics with which predictions can be made. Our work analyses different measures of change and introduces a novel approach to measuring the impact of recrawl schemes on search engine users. Our schemes detect important changes that affect user results. Other well-known and widely used schemes have to retrieve around twice the data to achieve the same effectiveness as our schemes. Furthermore, while many studies have assumed that the Web changes according to a model, our experimental results are based on real web documents. We analyse various stateless crawl ordering schemes that have no past change statistics with which to predict which documents will change, none of which, to our knowledge, has been tested to determine effectiveness in crawling changed documents. We empirically show that the effectiveness of these schemes depends on the topology and dynamics of the domain crawled and that no one static crawl ordering scheme can effectively maintain freshness, motivating our work on dynamic approaches. We present our novel approach to maintaining freshness, which uses the anchor text linking documents to determine the likelihood of a document changing, based on statistics gathered during the current crawl. We show that this scheme is highly effective when combined with existing stateless schemes. When we combine our scheme with PageRank, our approach allows the crawler to improve both freshness and quality of a collection. Our scheme improves freshness regardless of which stateless scheme it is used in conjunction with, since it uses both positive and negative reinforcement to determine which document to retrieve. Finally, we present the design and implementation of Lara, our own distributed crawler, which we used to develop our testbed.
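The dynamic, anchor-text-driven approach described in this abstract can be sketched in a few lines. The Python below is an illustrative reconstruction under stated assumptions, not the thesis's algorithm: the class name DynamicFrontier, the alpha blending parameter, and the unit increment/decrement reinforcement rule are inventions for illustration.

    import heapq
    from collections import defaultdict

    class DynamicFrontier:
        """Crawl frontier that reorders URLs using an anchor-text change signal.

        Each anchor term carries a weight that is reinforced positively when a
        document reached through it turns out to have changed, and negatively
        when it has not. A URL's priority blends a stateless base score (for
        example PageRank) with the average weight of its inlink anchor terms.
        """

        def __init__(self, alpha=0.5):
            self.alpha = alpha                     # blend of base score vs. change signal
            self.term_weight = defaultdict(float)  # anchor term -> reinforcement weight
            self.anchors = defaultdict(set)        # url -> anchor terms seen on inlinks

        def record_anchor(self, url, anchor_text):
            self.anchors[url].update(anchor_text.lower().split())

        def change_signal(self, url):
            terms = self.anchors[url]
            if not terms:
                return 0.0
            return sum(self.term_weight[t] for t in terms) / len(terms)

        def priority(self, url, base_score):
            return self.alpha * base_score + (1 - self.alpha) * self.change_signal(url)

        def reinforce(self, url, changed):
            # Positive reinforcement when the fetched copy differs from the
            # stored one, negative reinforcement when it does not.
            delta = 1.0 if changed else -1.0
            for term in self.anchors[url]:
                self.term_weight[term] += delta

        def order(self, urls_with_scores):
            # Yield URLs in descending priority (max-heap via negated keys).
            heap = [(-self.priority(u, s), u) for u, s in urls_with_scores]
            heapq.heapify(heap)
            while heap:
                _, url = heapq.heappop(heap)
                yield url

Blending the change signal with a stateless score in this way mirrors the abstract's claim that the scheme improves freshness regardless of which ordering it is paired with.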
32

Personligheter hos mjölkkor (Personalities in dairy cows)

Johansson, Lena January 2010 (has links)
No description available.
33

Merging and Consistency Checking of Distributed Models

Sabetzadeh, Mehrdad 26 February 2009 (has links)
Large software projects are characterized by distributed environments consisting of teams at different organizations and geographical locations. These teams typically build multiple overlapping models, representing different perspectives, different versions across time, different variants in a product family, different development concerns, etc. Keeping track of the relationships between these models, constructing a global view, and managing consistency are major challenges. Model Management is concerned with describing the relationships between distributed models, i.e., models built in a distributed development environment, and providing systematic operators to manipulate these models and their relationships. Such operators include, among others, Match, for finding relationships between disparate models, Merge, for combining models with respect to known or hypothesized relationships between them, Slice, for producing projections of models and relationships based on given criteria, and Check-Consistency, for verifying models and relationships against the consistency properties of interest. In this thesis, we provide automated solutions for two key model management operators, Merge and Check-Consistency. The most novel aspects of our work on model merging are (1) the ability to combine arbitrarily large collections of interrelated models and (2) support for toleration of incompleteness and inconsistency. Our consistency checking technique employs model merging to reduce the problem of checking inter-model consistency to checking intra-model consistency of a merged model. This enables a flexible way of verifying global consistency properties that is not possible with other existing approaches. We develop a prototype tool, TReMer+, implementing our merge and consistency checking approaches. We use TReMer+ to demonstrate that our contributions facilitate understanding and refinement of the relationships between distributed models.
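A minimal sketch of how a Merge operator of this kind might work, assuming models are simple element/edge graphs and relationships are pairs of corresponding elements; the class name ModelMerger, the union-find representation, and the example self-dependency check are assumptions for illustration, not the TReMer+ implementation.

    class ModelMerger:
        """Graph-based model merging: elements related across models are
        collapsed into a single node via union-find, producing one merged
        graph on which global (inter-model) properties can be checked as
        ordinary intra-model properties."""

        def __init__(self):
            self.parent = {}

        def _find(self, x):
            self.parent.setdefault(x, x)
            while self.parent[x] != x:
                self.parent[x] = self.parent[self.parent[x]]  # path halving
                x = self.parent[x]
            return x

        def _union(self, a, b):
            self.parent[self._find(a)] = self._find(b)

        def merge(self, models, relationships):
            """models: {name: {"elements": set, "edges": set of (src, dst)}}
            relationships: pairs ((model, elem), (model, elem)) asserted equal."""
            for a, b in relationships:
                self._union(a, b)
            nodes, edges = set(), set()
            for name, m in models.items():
                for e in m["elements"]:
                    nodes.add(self._find((name, e)))
                for s, d in m["edges"]:
                    edges.add((self._find((name, s)), self._find((name, d))))
            return nodes, edges

    def self_dependencies(nodes, edges):
        # Example consistency check on the merged graph: a dependency edge
        # between two elements declared equivalent collapses to a self-loop.
        return [n for n in nodes if (n, n) in edges]

Checking the merged graph rather than each model separately is what the abstract means by reducing inter-model consistency checking to intra-model checking.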
35

Managing Cache Consistency to Scale Dynamic Web Systems

Wasik, Chris January 2007 (has links)
Data caching is a technique web servers can use to speed up the response time of client requests. Dynamic websites are becoming more popular, but they pose a problem: it is difficult to cache dynamic content, as each user may receive a different version of a webpage. Caching fragments of content in a distributed way solves this problem, but poses a maintainability challenge: cached fragments may depend on other cached fragments, or on underlying information in a database. When the underlying information is updated, care must be taken to ensure that cached information is also invalidated. If new code is added that updates the database, the cache can very easily become inconsistent with the underlying data. The deploy-time dependency analysis method solves this maintainability problem by analyzing web application source code at deploy time and statically writing cache dependency information into the deployed application. This preserves the significant performance gains of distributed object caching without the maintainability problems that such caching otherwise creates.
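A minimal sketch of the idea under stated assumptions: the DEPENDENCIES map stands in for the cache dependency information that deploy-time analysis would write into the deployed application, and the table names, fragment keys, and class API are hypothetical.

    class FragmentCache:
        """Distributed-fragment cache with table-to-fragment invalidation
        driven by a statically generated dependency map."""

        DEPENDENCIES = {            # table -> cache fragments built from it
            "products": {"homepage:featured", "product:detail"},
            "reviews": {"product:detail", "product:review_summary"},
        }

        def __init__(self):
            self.store = {}

        def get(self, key, build):
            # Return the cached fragment, building and storing it on a miss.
            if key not in self.store:
                self.store[key] = build()
            return self.store[key]

        def write_through(self, table, do_write):
            # Apply the database write, then invalidate every fragment the
            # dependency map says was rendered from that table.
            do_write()
            for key in self.DEPENDENCIES.get(table, ()):
                self.store.pop(key, None)

Because the map is generated from the source code at deploy time, newly added code that writes a table is covered automatically, which is the maintainability point the abstract makes.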
37

An investigation of the fiber consistency distributions in turbulent tube flow.

Sanders, H. T. (Harry Thomas) 01 January 1970 (has links)
No description available.
38

A study of the factors influencing the chlorination of Mitscherlich sulphite pulp.

Voigtman, Edward H. 06 1900 (has links)
No description available.
39

Design and Analysis of a Highly Efficient File Server Group

Liu, Feng-jung 29 January 2005 (has links)
The IT community has increasingly come to view storage as a resource that should be shared among computer systems and managed independently of the systems it serves. The explosive growth of Web content has drawn increasing attention to two major challenges: the scalability and high availability of network file systems. Consequently, improving system reliability and availability, achieving the expected reduction in operational expenses, and easing the burden of system management have become essential issues. A basic technique for improving the reliability of a file system is to mask the effects of failures through replication, with consistency control protocols ensuring consistency among the replicas. In this dissertation, we leveraged the concept of an intermediate file handle to hide the heterogeneity of the underlying file systems. A monolithic server system, however, suffers from poor utilization because it neither checks dependences between writes nor manages out-of-order requests. We therefore built on the intermediate file handle and proposed an efficient data consistency control scheme that eliminates unnecessary waits for independent NFS writes, improving the efficiency of the file server group. We also proposed a simple load-sharing mechanism for NFS clients to improve system throughput and the utilization of duplicates. Finally, experimental results confirmed the efficiency of the proposed consistency control mechanism and load-sharing policy. Above all, ease of implementation was our main design consideration.
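A minimal sketch of the dependence check under stated assumptions: two writes conflict only when they target the same file handle with overlapping byte ranges, so independent writes need not wait on one another. The WriteScheduler API is hypothetical and omits the replication, queuing, and ordering details of the dissertation's protocol.

    from collections import defaultdict

    class WriteScheduler:
        """Dispatch NFS-style writes immediately when they are independent of
        every pending write; only conflicting writes are serialized."""

        def __init__(self):
            self.pending = defaultdict(list)   # file handle -> [(offset, length)]

        @staticmethod
        def _overlaps(a, b):
            # Byte ranges (offset, length) intersect.
            return a[0] < b[0] + b[1] and b[0] < a[0] + a[1]

        def independent(self, handle, offset, length):
            return all(not self._overlaps((offset, length), w)
                       for w in self.pending[handle])

        def submit(self, handle, offset, length):
            if self.independent(handle, offset, length):
                self.pending[handle].append((offset, length))
                return "dispatched"            # forwarded to the replicas at once
            return "queued"                    # must wait for the conflicting write

        def complete(self, handle, offset, length):
            self.pending[handle].remove((offset, length))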
40

Differential reinforcement of fixed-interval interresponse times: effects on choice

Wade, Tammy R. January 2002 (has links)
Thesis (M.A.)--West Virginia University, 2002. Title from document title page. Document formatted into pages; contains vii, 30 p. : ill. Includes abstract. Includes bibliographical references (p. 30).
