  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

エンドユーザーのWeb探索行動 / End users' Web exploration behaviour

種市, 淳子, Taneichi, Junko, 逸村, 裕, Itsumura, Hiroshi January 2006 (has links)
No description available.
12

Audit webových stránek prodejců výpočetní techniky / Audit of the electronics retailers' websites

Růžek, Pavel January 2009 (has links)
The main objective of this diploma thesis is to analyze the websites of electronics retailers. An initial analysis is followed by an evaluation of the results and the design of appropriate solutions. The thesis is divided into a theoretical and a practical section. The theoretical section focuses on consumer behavior and on the aspects of a well-developed website: visibility, usability, accessibility, and technical and graphic design. These elements form the basis of a methodology that is then used to analyze the websites. The practical section includes a detailed audit of five selected websites, together with user testing by randomly selected respondents. The detailed analysis is followed by a summary of the results for the individual companies. The output of the thesis is an overview of the quality of the tested companies' websites; its main contribution is an overall analysis and concrete improvement proposals for the electronics retailers' websites.
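A criteria-based audit of this kind can be sketched as a weighted score. The five criteria come from the abstract, but the weights and ratings below are illustrative assumptions, not the thesis's actual methodology.

```python
# Illustrative weights for the five criteria named in the abstract
# (assumed values -- the thesis defines its own methodology).
WEIGHTS = {"visibility": 0.25, "usability": 0.30, "accessibility": 0.20,
           "technical": 0.15, "graphic_design": 0.10}

def audit_score(ratings):
    """Weighted average of per-criterion ratings on a 0-10 scale."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights must sum to 1
    return sum(WEIGHTS[c] * ratings[c] for c in WEIGHTS)

# Hypothetical ratings for one audited site:
site = {"visibility": 7, "usability": 8, "accessibility": 5,
        "technical": 9, "graphic_design": 6}
print(round(audit_score(site), 2))  # 7.1
```

Scoring each site on the same criteria makes the cross-retailer comparison in the practical section directly comparable.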
13

The guiding process in discovery hypertext learning environments for the Internet

Pang, Kingsley King Wai January 1998 (has links)
Hypertext is the dominant method of navigating the Internet, giving users freedom and control over their navigational behaviour. Existing educational material is increasingly being converted into Internet web pages, but weaknesses have been identified in current WWW learning systems: a lack of conceptual support for learning from hypertext, navigational disorientation, and cognitive overload. This implies the need for an established pedagogical approach to developing the web as a teaching and learning medium. Guided Discovery Learning is proposed as a pedagogy suitable for supporting WWW learning. The first hypothesis is that a guided discovery environment will produce greater gains in learning and satisfaction than a non-adaptive hypertext environment. The second hypothesis is that combining concept maps with this educational paradigm will provide cognitive support. The third hypothesis is that student learning styles will not influence learning outcome or user satisfaction, which would provide evidence that the guided discovery paradigm suits many types of learner. This was investigated by building a guided discovery system and devising a framework for assessing teaching styles. The system provided varying discovery steps, guided advice, individualized system instruction, and navigational control. An 84-subject experiment compared a Guided discovery condition, a Map-only condition, and an Unguided condition. Subjects were subdivided according to learning style, with measures for learning outcome and user satisfaction. The results indicate that providing guidance results in a significant increase in the level of learning. Guided discovery subjects, regardless of learning style, experienced levels of satisfaction comparable to those in the other conditions. The concept mapping tool did not appear to affect learning outcome or user satisfaction.
The conclusion was that using a particular approach to guidance results in a more supportive environment for learning. This research contributes to a better understanding of the pedagogic design that should be incorporated into WWW learning environments, recommending a guided discovery approach to alleviate major hypertext and WWW issues for distance learning.
14

Intelligent information retrieval from the World Wide Web using fuzzy user modelling

Mooney, Gabrielle Joanne January 1999 (has links)
This thesis investigates the application of fuzzy logic techniques and user modelling to the process of information retrieval (IR) from the World Wide Web (WWW). The research issue is whether this process can be improved through such an application. The exponential rise of information as an invaluable global commodity, coupled with accelerating development in computing and telecommunications, and boosted by networked information sources such as the WWW, has led to the development of tools, such as search engines, to facilitate information search and retrieval. However, despite their sophistication, they are unable to address users' information needs effectively. Also, as the WWW can be seen as a dynamic, continuously changing global information corpus, these tools suffer from the problems of irrelevancy and redundancy. Therefore, in order to overcome these problems and remain effective, IR systems need to become 'intelligent' in some way. It is from this premise that the focus of this research has developed. Initially, theoretical and investigative research into the areas of IR from electronic sources and the nature of the Internet (including the WWW) revealed that highly sophisticated systems are being developed and that there is a drive towards the integration of, for example, electronic libraries, CD-ROM networks, and the WWW. Research into intelligent IR, the use of AI techniques to improve the IR process, informed an evaluation of various approaches. This revealed that a number of techniques, for example expert systems, neural networks and semantic networks, have been employed with limited success. Owing to the nature of the WWW, though, many of the previous AI approaches are inapplicable as they rely too heavily on extensive knowledge of the retrieval corpus. However, the evaluation suggested that fuzzy logic, with its inherent ability to capture partial knowledge within fuzzy sets, is a valid approach.
User modelling research indicated that adaptive user stereotypes are a fruitful way to represent different types of user and their information needs. Here, these stereotypes are represented as fuzzy sets, ensuring flexibility and adaptivity. The goal of the reported research, then, was not to develop an 'intelligent agent' but to apply fuzzy logic techniques and user modelling to the process of user query formulation, in order to test the research issue: whether the application of these techniques could improve the IR process. A prototype system, the Fuzzy Modelling Query Assistant (FMQA), was developed that attempts intelligently to assist the user in capturing their information need. The concept was to refine the user's query before submitting it to an existing search engine, in order to improve upon the IR results of using the search tool alone. To address the research issue, a user study of the FMQA was performed; its design and conduct are reported in depth. The study results were analysed and the findings are given. The results indicate that, especially for certain types of user, the FMQA does improve the IR process in terms of the results. There is a critical review of the research aims in the light of the results, conclusions are drawn, and recommendations for future research are given.
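The fuzzy-stereotype idea can be illustrated with a minimal sketch. This is not the FMQA's actual algorithm: the trait names, values, and the averaged-closeness membership function below are all assumptions for illustration.

```python
def membership(user_traits, stereotype):
    """Degree (0..1) to which a user belongs to a stereotype.

    Each trait contributes a closeness score 1 - |user - stereotype|;
    here we average them (a stricter fuzzy AND would take the minimum).
    """
    degrees = [1 - abs(user_traits.get(t, 0.0) - v)
               for t, v in stereotype.items()]
    return sum(degrees) / len(degrees)

# Hypothetical stereotypes as fuzzy sets over two observed traits:
novice = {"search_experience": 0.2, "domain_knowledge": 0.3}
expert = {"search_experience": 0.9, "domain_knowledge": 0.8}
user = {"search_experience": 0.3, "domain_knowledge": 0.2}

print(round(membership(user, novice), 2))  # 0.9 -> refine with broader terms
print(round(membership(user, expert), 2))  # 0.4
```

Partial membership in several stereotypes at once is exactly what a crisp (boolean) user model cannot express, and is what motivates the fuzzy-set representation.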
15

Comunidades mediadas pela Internet : uma pesquisa multimétodos para estruturação de base conceitual e projeto de web sites / Internet-mediated communities: a multi-method study to structure a conceptual base and design web sites

Bellini, Carlo Gabriel Porto January 2001 (has links)
The object of study of this research is Internet-mediated communities (IMCs). An IMC consists of a set of people who share interests and who, for some period of time, use common resources on the Internet (for example, a web site, the preferred object of this work) to exchange information with one another about those shared interests. The research is exploratory and qualitative, using case studies, action research, and in-depth interviews to structure a conceptual base for IMCs and to gather relevant elements to consider when building web sites for them. A case study of five (5) IMC web sites was carried out in order to identify the main technologies and methods currently used to structure such sites. In the action research, seven (7) groups of people were identified and one (1) web site was built for each, offering a space on the Internet for the interaction of their members. Observing the interaction through these web sites led to the conclusion that, of the seven initial groups, only one (1) could be characterized as an IMC according to Jones's (1997) criteria: sustained membership, a variety of communicators, a virtual space for group communication, and interactivity. For the in-depth interviews, a questionnaire was prepared based on the theoretical framework, the case studies, and the action research, and was applied to seventeen (17) people (from the single IMC and from two of the seven groups). The goal of the interviews was to collect perceptions about the web sites used by the groups; under content analysis, these perceptions helped form a set of twelve (12) recommendations for building web sites for IMCs. The recommendations are diverse in nature, but they make clear the need for a deep understanding of an IMC's context prior to the design of its web site.
16

Autonomous Cooperating Web Crawlers

McLearn, Greg January 2002 (has links)
A web crawler provides an automated way to discover web events: creation, deletion, or updates of web pages. Competition among web crawlers results in redundant crawling, wasted resources, and less-than-timely discovery of such events. This thesis presents a cooperative sharing crawler algorithm and sharing protocol. Without resorting to altruistic practices, competing (yet cooperative) web crawlers can mutually share discovered web events with one another to maintain a more accurate representation of the web than is currently achieved by traditional polling crawlers. The choice to share or merge is entirely up to an individual crawler: sharing is the act of allowing a crawler M to access another crawler's web-event data (call this crawler S), and merging occurs when crawler M requests web-event data from crawler S. Crawlers can choose to share with competing crawlers if it helps reduce contention between peers for resources associated with the act of crawling. Crawlers can choose to merge from competing peers if it helps them maintain a more accurate representation of the web at less cost than directly polling web pages. Crawlers control how often they merge through a parameter ρ, which dictates the percentage of time spent either polling or merging with a peer. Depending on certain conditions, pathological behaviour can arise if polling or merging is the only form of data collection. Simulations of communities of simple cooperating web crawlers successfully show that a combination of polling and merging (0 < ρ < 1) can give an individual member of the cooperating community a higher degree of accuracy in its representation of the web than a traditional polling crawler achieves. Furthermore, if web crawlers are allowed to evaluate their own performance, they can dynamically switch between periods of polling and merging and still perform better than traditional crawlers.
The mutual performance gain increases as more crawlers are added to the community.
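The role of ρ can be rendered as a toy simulation step. The split logic below is only a sketch of how a per-step poll/merge decision driven by ρ might look; the step counts and seed are assumptions, not values from the thesis.

```python
import random

def run_crawler(rho, steps, rng):
    """Each step, merge from a peer with probability rho, otherwise poll
    a page directly; returns (polls, merges).

    A merge pulls in events a peer has already discovered, while a poll
    discovers at most one event, which is why the thesis finds a mixture
    (0 < rho < 1) tracks the web more accurately than pure polling.
    """
    polls = merges = 0
    for _ in range(steps):
        if rng.random() < rho:
            merges += 1
        else:
            polls += 1
    return polls, merges

polls, merges = run_crawler(0.3, 10_000, random.Random(42))
print(polls, merges)  # roughly a 70/30 split of the 10,000 steps
```

Setting ρ = 0 or ρ = 1 recovers the pure-polling and pure-merging extremes where the abstract notes pathological behaviour can arise.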
17

Ungdomars tankar kring betydelsen av färg och form vid framställningen av en internationell webbtidning / Young people's thoughts on the role of colour and form in producing an international web magazine

Back, Jon, Cross-Rosell, Sandy January 2001 (has links)
When a graphic profile is to be created for an exciting new international web magazine for young people, it is important to consider how colour and form affect our emotions. Colour experiments conducted by leading colour psychologists and colour researchers such as Goethe, Itten, Lüscher and Karl Ryberg show that colour has a strong psychological effect on people. The Internet is a very effective communication medium, and to communicate effectively it is not only the words but also the colours and forms of the web pages that must be well thought out. Which colours do young people in Sweden and Chile associate with certain emotions? Which forms are associated with the same emotions? The answers to these questions were compiled and analysed from the responses to a questionnaire on colour and form distributed to two secondary schools in Borlänge, Sweden, and one in Santiago, Chile.
19

Design and Implementation of Indexing Strategies for XML Documents

Lin, Mao-Tong 07 July 2002 (has links)
In recent years, many people have used the World Wide Web to find the information they want. HTML is a document markup language for publishing hypertext on the WWW and has been the target format for content developers around the world. HTML tags, however, primarily describe how to display a data item, so useful information is difficult to extract from HTML documents: content is mixed with display tags. XML, on the other hand, is a data format for data exchange between inter-enterprise applications on the Internet. To facilitate data exchange, industry groups define public Document Type Definitions (DTDs) that specify the format of the XML documents to be exchanged between their applications. Moreover, WWW/EDI and Electronic Commerce are very popular, and much business data uses XML to exchange information on the World Wide Web. XML tags describe the data itself, separating a document's content (meaning) from its display format, which makes it easy to find and analyze meaningful information in XML documents. When a large volume of business data (XML documents) exists, one way to support its management is to apply relational databases; for such an approach, we must transform the XML documents into relational databases. In this thesis, we design and implement indexing strategies to efficiently access XML documents. An XML document is fundamentally different from relational data: XML is hierarchical and nested, very similar to the semistructured data model. The characteristic of semistructured data is that it may not have a fixed schema and may be irregular or incomplete. Although the semistructured data model is flexible for data modeling, it requires a large search space in query processing since no schema is fixed in advance.
Indexing is a way to improve query performance. Due to the special properties of semistructured data, there are up to five types of queries: (1) complete single path, (2) specified leaf only, (3) specified intrapath, (4) specified attribute/element (value), and (5) multiple paths at the same level. In this thesis, we classify all possible queries into these five query types and create different indexes for the different types. Moreover, we design and implement the query transformation from XML query statements to SQL statements, and we create a user-friendly interface for users to input XML queries. The whole system is implemented in Java and SQL Server 2000. Our experiments show that these indexing strategies improve XML query processing performance very well.
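The five query types can be illustrated with a rough classifier. The path syntax assumed below ('/'-separated steps, '//' for unspecified ancestors, '[...]' for value predicates, '|' joining alternative paths) is an XPath-like assumption for illustration, not the thesis's actual query language.

```python
def classify_query(query):
    """Map a path query string to one of the five types from the abstract.

    Checks are ordered so the most specific surface feature wins:
    a '|' marks alternative paths, a '[...]' predicate marks a value
    query (even if the path also uses '//'), and so on.
    """
    if "|" in query:
        return "multiple paths, same level"
    if "[" in query:
        return "attribute/element with value"
    if query.startswith("//") and "/" not in query[2:]:
        return "leaf only"          # only the leaf tag is specified
    if "//" in query:
        return "intrapath"          # a partial path with gaps
    return "complete single path"   # fully specified from the root

print(classify_query("/catalog/book/title"))   # complete single path
print(classify_query("//title"))               # leaf only
print(classify_query("/catalog//title"))       # intrapath
print(classify_query("//book[@id='b1']"))      # attribute/element with value
print(classify_query("/a/b | /a/c"))           # multiple paths, same level
```

Routing each type to its own index is the core of the strategy: a leaf-only query, for instance, needs a tag-to-paths lookup, while a complete single path can hit a path index directly.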
20

A Study on Information Acquisition Strategies on the World Wide Web

Doong, Her-Sen 13 August 2002 (has links)
The rapid growth of the World Wide Web has created a new platform for information exchange. Although the WWW makes more data easily available, it has also created problems such as information overload, disorientation, and reduced data quality. Most current approaches to these problems focus on information filtering and searching from a technical perspective; few studies have provided analytical results on information acquisition behavior over the Web. Based on cognitive fit theory and task-technology fit (TTF) theory, this study proposes a comprehensive research model to describe individual information acquisition strategies on the World Wide Web. To examine the research model, a laboratory experiment was performed with a group of 120 students. Sixteen task scenarios were designed and a prototype website was developed according to specifications defined through the literature review and a pilot study. The results show that both task structure and system characteristics affect individual information acquisition behavior on the website. The experiment also confirms that subjects feel more satisfaction when adopting formal search and purposeful browsing strategies. Understanding of the research object is a function of the scanning and purposeful browsing strategies, and the interaction between task structure and system characteristics is also significant. These findings allow a better understanding of information acquisition behavior on the Web.
