11

Interactive HTML

Hackborn, Dianne 13 January 1997
As the World Wide Web continues to grow, people clearly want to do much more with it than just publish static pages of text and graphics. While such increased interactivity has traditionally been accomplished through the use of server-side CGI scripts, much recent research on Web browsers has been on extending their capabilities through the addition of various types of client-side services. The most popular of these extensions take the form of plug-ins, applets, and "document scripts" such as JavaScript. However, because these extensions have been created in a haphazard way by a variety of independent groups, they suffer greatly in terms of flexibility, uniformity, and interoperability. Interactive HTML is a system that addresses these problems by combining plug-ins, applets, and document scripts into one uniform and cohesive architecture. It is implemented as an external C library that can be used by a browser programmer to add client-side services to the browser. The iHTML services are implemented as dynamically loaded "language modules," allowing new plug-ins and language interpreters to be added to an iHTML browser without recompiling the browser itself. The system is currently integrated with NCSA's X Mosaic browser and includes language modules for a text viewer plug-in and a Python language interpreter. This thesis examines the iHTML architecture in the context of the historical development of Web client-side services and presents an example of iHTML's use to collect usage information about Web documents. / Graduation date: 1997
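As a rough illustration of the architecture described in this abstract (iHTML itself is a C library; this is not code from the thesis), the following Python sketch shows the idea of a single registry that lazily loads "language modules" keyed by MIME type, so new handlers can be added without rebuilding the browser. All module names here are hypothetical.

```python
# Hypothetical sketch of a language-module registry in the spirit of iHTML.
# The module names and interface are made up for illustration only.
import importlib

class LanguageModuleRegistry:
    """Maps MIME types to handler modules that are loaded on first use."""
    def __init__(self, table):
        self.table = table      # mime type -> module name (hypothetical names)
        self.loaded = {}

    def handler_for(self, mime_type):
        module_name = self.table[mime_type]
        if module_name not in self.loaded:
            # Dynamic loading: the browser binary never has to be recompiled
            # to gain support for a new plug-in or interpreter.
            self.loaded[module_name] = importlib.import_module(module_name)
        return self.loaded[module_name]

registry = LanguageModuleRegistry({
    "text/plain": "ihtml_text_viewer",       # plug-in-style module (hypothetical)
    "application/x-python": "ihtml_python",  # interpreter module (hypothetical)
})
# registry.handler_for("text/plain").render(document_part)  # sketch of use
```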
12

A conceptual framework for web-based collaborative design

Gottfried, Shikha Ghosh 05 December 1996
Although much effort has been invested to build applications that support group work, collaborative applications have not found easy success. The cost of adopting and maintaining collaborative applications has prevented their widespread use, especially among small distributed groups. Application developers have had difficulties recognizing the extra effort required by groups to use collaborative applications and how to either reduce this effort or provide other benefits to compensate for the extra work. These problems have limited the success of collaborative applications, which have not attained the same level of productivity improvements that single-user applications have achieved. In this thesis we present a framework that describes the types of computer support that can facilitate the work of distributed engineering design groups. Our framework addresses support for web-based groups in particular because we believe the web can be a powerful medium for collaboration if accommodated properly. We show how the concepts in this framework can be implemented by prototyping a web-based engineering decision support system. Our framework is a synthesis of ideas motivated by an examination of literature in various fields that share a common interest in collaborative work. It can influence application development by helping developers become aware of the types of support that should be considered to aid web-based collaborative design. / Graduation date: 1997
13

Suche und Orientierung im WWW : Verbesserung bisheriger Verfahren durch Einbindung hypertextspezifischer Informationen / Search and orientation in the WWW: improving existing methods by incorporating hypertext-specific information

Bekavac, Bernard. 1999
Diss. Univ. Konstanz, 1999.
14

Network monitoring with focus on HTTP

Schmid, Andreas 01 May 1998
Since the introduction of the World Wide Web (WWW) in the early 1990s, the rapid growth of its traffic has raised the question of whether past Local Area Network (LAN) packet traces still reflect the current situation or whether they have become obsolete. For this thesis, several LAN packet traces were obtained by monitoring the LAN of a typical academic environment. The monitoring tools were a stand-alone HP LAN Protocol Analyzer and the freeware tool tcpdump. The main focus was placed on acquiring a low-level overview of the LAN traffic, making it possible to determine which protocols were mainly used and how packet sizes were distributed. In particular, this study aimed at establishing the amount of WWW traffic on the LAN and determining the MIME types of this traffic. The results indicate that in a typical academic environment, conventional sources of LAN traffic such as NFS are still predominant, whereas WWW traffic plays a rather marginal role. Furthermore, a large portion of the network packets contains little or no data at all, while another significant portion has sizes around the Maximum Transfer Unit (MTU). Consequently, research in the networking field has to direct its focus to issues besides the WWW. / Graduation date: 1998
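The kind of low-level trace summary described here can be sketched in a few lines. This is not the thesis's tooling (an HP LAN Protocol Analyzer and tcpdump); it is a minimal Python example, assuming a hypothetical pcap file and the scapy library.

```python
# Sketch: summarize a packet trace -- protocol mix, packet-size distribution,
# and a crude HTTP (port 80) share. Assumes scapy and a file "trace.pcap".
from collections import Counter
from scapy.all import rdpcap, IP, TCP, UDP

packets = rdpcap("trace.pcap")              # hypothetical trace file
proto_counts = Counter()
size_buckets = Counter()
http_bytes = 0

for pkt in packets:
    size = len(pkt)
    size_buckets[min(size // 256, 5)] += 1  # 256-byte buckets, capped at 1280+
    if pkt.haslayer(TCP):
        proto_counts["TCP"] += 1
        if pkt[TCP].dport == 80 or pkt[TCP].sport == 80:
            http_bytes += size              # rough proxy for WWW traffic
    elif pkt.haslayer(UDP):
        proto_counts["UDP"] += 1
    elif pkt.haslayer(IP):
        proto_counts["other IP"] += 1
    else:
        proto_counts["non-IP"] += 1

print(proto_counts)
print(sorted(size_buckets.items()))
print("HTTP bytes:", http_bytes)
```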
15

Attribute Exploration on the Web

Jäschke, Robert, Rudolph, Sebastian 28 May 2013
We propose an approach for supporting attribute exploration by web information retrieval, in particular by posing appropriate queries to search engines, crowdsourcing systems, and the linked open data cloud. We discuss underlying general assumptions for this to work and the degree to which these can be taken for granted.
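A minimal sketch of the idea, under the assumption that a search engine plays the role of the "expert" answering implication questions in attribute exploration: for a candidate implication A → B, look for potential counterexamples, i.e. pages matching all premises but not the conclusion. The search() function below is a placeholder, not a real API.

```python
# Toy attribute-exploration step backed by web search; search() is a stub.
def build_counterexample_query(premises, conclusion):
    """e.g. premises={'lake', 'saltwater'}, conclusion='endorheic'."""
    required = " ".join(f'"{a}"' for a in sorted(premises))
    return f'{required} -"{conclusion}"'

def accept_implication(premises, conclusion, search, threshold=0):
    """Accept A -> B if the web yields no (or few) counterexample hits."""
    hits = search(build_counterexample_query(premises, conclusion))
    return len(hits) <= threshold

# Usage with a stubbed search engine:
fake_search = lambda query: []   # pretend the web returned no counterexamples
print(accept_implication({"lake", "saltwater"}, "endorheic", fake_search))
```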
16

Syntactic and Semantic Analysis and Visualization of Unstructured English Texts

Karmakar, Saurav 14 December 2011
People have complex thoughts, and they often express them in complex sentences in natural language. This complexity may facilitate efficient communication among an audience with the same knowledge base, but for a different or new audience such compositions become cumbersome to understand and analyze. Analyzing such compositions with syntactic or semantic measures is challenging and forms the base step of natural language processing. In this dissertation I explore and propose a number of new techniques to analyze and visualize the syntactic and semantic patterns of unstructured English texts. The syntactic analysis is done through a proposed visualization technique that categorizes and compares different English compositions based on their reading complexity metrics. For the semantic analysis I use Latent Semantic Analysis (LSA) to uncover hidden patterns in complex compositions; I have applied this technique to comments from a social visualization web site to detect irrelevant ones (e.g., spam). Patterns of collaboration are also studied through statistical analysis. Word sense disambiguation is used to identify the correct sense of a word in a sentence or composition. Applying a textual similarity measure, built on different word similarity measures and word sense disambiguation, to collaborative text snippets from a social collaborative environment points to a way of untangling the complex hidden patterns of collaboration.
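As a hedged illustration of the LSA-based comment filtering described above (not the dissertation's implementation), a minimal sketch using scikit-learn could look like this: project comments into a low-dimensional latent space and flag the ones that are dissimilar to every other comment as candidates for the "irrelevant" bin.

```python
# Minimal LSA sketch: TF-IDF, truncated SVD, then pairwise similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

comments = [
    "great visualization of unemployment trends",
    "the color scale hides the regional differences",
    "buy cheap watches online now",          # likely spam
]

tfidf = TfidfVectorizer(stop_words="english").fit_transform(comments)
lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

# Score each comment by its best similarity to any other comment;
# isolated comments (low score) are candidates for removal.
sims = cosine_similarity(lsa)
for i, text in enumerate(comments):
    best = max(s for j, s in enumerate(sims[i]) if j != i)
    print(f"{best:5.2f}  {text}")
```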
17

Using the informational processing paradigm to design commercial rumour response strategies on the World Wide Web

Howell, Gwyneth Veronica James January 2006
[Truncated abstract] Rumours can lead to unpredictable events: the manner in which an organisation responds to a commercial rumour can alter its reputation, and can affect its profitability as well as, ultimately, its survival. Commercial rumours are now a prominent feature of the business environment. They can emerge from organisational change, pending workforce layoffs, mergers, and changes to management; in addition, commercial rumours can lower morale and undermine productivity. There are several well-known examples of commercial rumours that have been, or continue to be, circulated. Commercial rumours are typically about either a conspiracy or a contamination issue. Conspiracy rumours usually target those organisational practices or policies which are identified as undesirable by the stakeholders. This form of rumour is often precipitated by situations where people do not have all the information about a situation, for example the rumour about Procter & Gamble being run by the Moonies. Snapple, the soft drink company, was rumoured in 1992 to be supporting the Ku Klux Klan in closing abortion clinics. Contamination rumours are wide-ranging and typically have a revulsion theme, such as McDonald's "worms in the burger", Pop Rocks candies which exploded in the stomach, and poison in Herron's paracetamol . . . Marketers suggest that web sites represent the future of marketing communications on the Internet. The key implication of this study for organisations is that, when faced with a negative rumour, specific and selected Web pages can be used to manage how the company's stakeholders recall the rumour, and organisational stakeholders can be persuaded by the company's rumour response strategies.
18

Incorporação de qualidade de serviço no modelo de serviços Web / Inclusion of quality of service into the Web service model

Garcia, Diego Zuquim Guimarães, 1982- 03 May 2007
Advisor: Maria Beatriz Felgar de Toledo / Master's thesis - Universidade Estadual de Campinas, Instituto de Computação / Abstract: Although the Web service technology allows the development and execution of distributed applications, it still lacks facilities to deal with Quality of Service (QoS). Consumers may require services with particular non-functional characteristics and expect quality level guarantees. The goal of this thesis is to propose an extended Web service architecture supporting QoS management for Web services. It includes brokers to facilitate service selection according to functional and non-functional requirements and monitors to verify QoS attributes. The main contributions of this approach are: the use of the Web Services Policy Framework (WS-Policy) standard to complement Web Services Description Language (WSDL) specifications with QoS policies; an extension to the Universal Description Discovery & Integration (UDDI) standard for QoS-enriched Web service publication and discovery; and QoS updating to reflect actual service attributes. / Master's degree in Computer Science
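To make the broker role concrete, here is a small illustrative sketch; it is not the thesis's implementation (which builds on WS-Policy, WSDL and UDDI), only a Python toy showing a broker that filters functionally matching services by their advertised QoS against a consumer's non-functional requirements.

```python
# Hypothetical broker sketch: pick services whose advertised QoS meets bounds.
from dataclasses import dataclass

@dataclass
class ServiceOffer:
    name: str
    operation: str   # functional requirement (what the service does)
    qos: dict        # advertised non-functional attributes (e.g. from a policy)

def meets(offer_qos, required):
    for key, bound in required.items():
        if key.endswith("_ms"):                       # latency-like: lower is better
            if offer_qos.get(key, float("inf")) > bound:
                return False
        else:                                         # availability-like: higher is better
            if offer_qos.get(key, 0.0) < bound:
                return False
    return True

def select_service(offers, operation, required_qos):
    """Return acceptable offers, best (lowest latency) first."""
    matches = [o for o in offers if o.operation == operation and meets(o.qos, required_qos)]
    return sorted(matches, key=lambda o: o.qos.get("latency_ms", float("inf")))

offers = [
    ServiceOffer("FastQuote", "getQuote", {"latency_ms": 80, "availability": 0.99}),
    ServiceOffer("CheapQuote", "getQuote", {"latency_ms": 400, "availability": 0.95}),
]
print(select_service(offers, "getQuote", {"latency_ms": 100, "availability": 0.98}))
```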
19

Attribute Exploration on the Web

Jäschke, Robert, Rudolph, Sebastian 28 May 2013
We propose an approach for supporting attribute exploration by web information retrieval, in particular by posing appropriate queries to search engines, crowdsourcing systems, and the linked open data cloud. We discuss underlying general assumptions for this to work and the degree to which these can be taken for granted.
20

Collecte orientée sur le Web pour la recherche d’information spécialisée / Focused document gathering on the Web for domain-specific information retrieval

De Groc, Clément 05 June 2013
Vertical search engines, which focus on a specific segment of the Web, are becoming more and more present in the Internet landscape. Topical search engines, notably, can obtain a significant performance boost by limiting their index to a specific topic. By doing so, language ambiguities are reduced, and both the algorithms and the user interface can take advantage of domain knowledge, such as domain objects or characteristics, to satisfy user information needs. In this thesis, we tackle the first, inevitable step of any topical search engine: focused document gathering from the Web. A thorough study of the state of the art leads us to consider two strategies for gathering topical documents from the Web: either relying on an existing search engine index (focused search) or directly crawling the Web (focused crawling). The first part of our research is dedicated to focused search. In this context, a standard approach consists in combining domain-specific terms into queries, submitting those queries to a search engine and downloading the top-ranked documents. After empirically evaluating this approach over 340 topics, we propose to enhance it in two ways. Upstream of the search engine, we aim at formulating queries that are more relevant to the topic in order to increase the precision of the top retrieved documents; to do so, we define a metric based on a co-occurrence graph and a random walk algorithm, which aims at predicting the topical relevance of a query. Downstream of the search engine, we filter the retrieved documents in order to improve the quality of the document collection; we do so by modeling our gathering process as a tripartite graph and applying a random walk with restart algorithm so as to simultaneously order by relevance the documents and the terms appearing in our corpus. In the second part of this thesis, we turn to focused crawling. We describe our focused crawler implementation, which was designed to scale horizontally. We then consider the problem of crawl frontier ordering, which is at the very heart of a focused crawler: the ordering strategy allows the crawler to prioritize its fetches, maximizing the number of in-domain documents retrieved while minimizing the non-relevant ones. We propose to apply learning-to-rank algorithms to efficiently order the crawl frontier, and define a method to learn a ranking function from existing crawls.
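The random-walk-with-restart ranking mentioned in the abstract can be illustrated with a toy example. The graph below is a simplified bipartite document-term graph rather than the thesis's tripartite one, and the code is only a numpy sketch, not the thesis's system.

```python
# Toy random walk with restart over a document-term graph: nodes connected
# to the on-topic seed term accumulate more score than off-topic ones.
import numpy as np

nodes = ["doc1", "doc2", "doc3", "term:opera", "term:cooking"]
edges = [("doc1", "term:opera"), ("doc2", "term:opera"), ("doc3", "term:cooking")]

idx = {n: i for i, n in enumerate(nodes)}
A = np.zeros((len(nodes), len(nodes)))
for u, v in edges:
    A[idx[u], idx[v]] = A[idx[v], idx[u]] = 1.0

# Column-stochastic transition matrix (guard against isolated nodes).
P = A / np.maximum(A.sum(axis=0), 1)

# Restart vector: probability mass on seed nodes known to be on-topic.
restart = np.zeros(len(nodes))
restart[idx["term:opera"]] = 1.0

alpha, scores = 0.85, np.full(len(nodes), 1 / len(nodes))
for _ in range(100):                      # power iteration to a near-fixpoint
    scores = alpha * P @ scores + (1 - alpha) * restart

for n, s in sorted(zip(nodes, scores), key=lambda x: -x[1]):
    print(f"{s:.3f}  {n}")
```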
