21

Information visibility on the Web and conceptions of success and failure in Web searching

Mansourian, Yazdan January 2006 (has links)
This thesis reports the procedure and findings of an empirical study of end users' interaction with web-based search tools. The first part addresses the early research questions, exploring web users' conceptions of the invisible web. The second part addresses the primary research questions, exploring web users' conceptualizations of the causes of their search success or failure and their awareness of, and reaction to, missed information while searching the web. The third part is devoted to a number of emergent research questions, re-examining the dataset in the light of several theoretical frameworks: Locus of Control, Self-efficacy, Attribution Theory, and Bounded Rationality and Satisficing theory. Data collection was carried out in three phases based on in-depth, open-ended and semi-structured interviews with a sample of academic staff, research staff and research students from three biology-related departments at the University of Sheffield. A combination of inductive and deductive approaches was employed to address the three sets of research questions. The first part of the analysis, based on Grounded Theory, led to the discovery of a new concept, 'information visibility', which makes a distinction between the technical, objective conceptions of the invisible web that commonly appear in the literature and a cognitive, subjective conception based on searchers' perceptions of search failure. Accordingly, the study introduced a 'model of information visibility on the web' which suggests a complementary definition of the invisible web. Inductive exploration of the data to address the primary research questions culminated in the identification of different kinds of success (i.e. anticipated, serendipitous and unexpected success) and failure (i.e. unexpected, unexplained and inevitable failure). The results also showed that the participants were aware of the possibility of missing relevant information in their searches, and that the risk of missing potentially important information was a matter of concern to them. However, depending on the context of each search, they had different perceptions of the importance and volume of the missed information, and reacted to it differently. In view of this, two matrices, the 'matrix of search impact' and the 'matrix of search depth', were developed to describe users' search behaviours with regard to their awareness of and reaction to missed information. The matrix of search impact suggests that there are different perceptions of the risk of missing information: 'inconsequential', 'tolerable', 'damaging' and 'disastrous'. The matrix of search depth illustrates different search strategies: 'minimalist', 'opportunistic', 'nervous' and 'extensive'. The third part of the study indicated that Locus of Control and Attribution Theory are useful theoretical frameworks for better understanding web-based information seeking. Furthermore, interpretation of the data with regard to Bounded Rationality and Satisficing theory supported the inductive findings, showing that web users' estimates of the likely volume and importance of missed information affect their decision to persist in searching. At the final stage of the study, an integrative, six-layer model of information-seeking behaviour on the web was developed, incorporating the results of both the inductive and deductive stages of the study.
22

Detection of unsolicited web browsing with clustering and statistical analysis

Chwalinski, Pawel January 2014 (has links)
Unsolicited web browsing denotes the illegitimate accessing or processing of web content. The harmful activity varies from extracting e-mail information to downloading an entire website for duplication. In addition, computer criminals prevent legitimate users from gaining access to websites by implementing denial-of-service attacks with high-volume legitimate traffic. These offences are accomplished by preprogrammed machines that evade rate-dependent intrusion detection systems. It is therefore assumed in this thesis that the only difference between a legitimate and a malicious web session lies in the intention rather than in physical characteristics or network-layer information. As a result, the main aim of this research has been to provide a method of malicious-intention detection. This has been accomplished in a two-fold process. First, to discover the most recent and popular transitions of lawful users, a clustering method based on entropy minimisation has been introduced. In principle, by following popular transitions among web objects, legitimate users are placed in low-entropy clusters, as opposed to undesired hosts, whose transitions are uncommon and lead to placement in high-entropy clusters. In addition, by comparing the distributions of sequences of requests generated by actual and malicious users across the clusters, it is possible to discover whether or not a website is under attack. Second, a set of statistical measurements has been tested to detect the actual intention of browsing hosts. Intention classification based on Bayes factors and likelihood analysis provided the best results. The combined approach has been validated against actual web traces (i.e. datasets) and generated promising results.
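The entropy intuition can be illustrated in a few lines. The sketch below is not the thesis's implementation; it simply shows how concentrated (popular) page-to-page transitions yield low entropy while a crawler's scattered transitions yield high entropy. The session data are hypothetical.

```python
from collections import Counter
from math import log2

def transition_entropy(sessions):
    """Shannon entropy of the distribution of page-to-page transitions
    pooled across a cluster of sessions."""
    counts = Counter(t for s in sessions for t in zip(s, s[1:]))
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

# Legitimate users mostly follow the same popular path ...
human = [["home", "news", "article"], ["home", "news", "article"]]
# ... while a scraper walks the site exhaustively, one page after another.
bot = [["home", "a", "b"], ["b", "c", "d"], ["d", "e", "f"]]

print(transition_entropy(human))  # 1.0  -- transitions are concentrated
print(transition_entropy(bot))    # ~2.58 -- transitions are spread out
```

Clustering that minimises this quantity would, on such data, group the repetitive human sessions together and push the bot's uncommon transitions into high-entropy clusters.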
23

Detecting network Quality of Service on a hop-by-hop basis for on-line multimedia application connections

Al-Rawi, Momen M. January 2006 (has links)
With the rapidly expanding use of networked on-line multimedia applications in general, and Voice over IP applications in particular, it is vital to assess the quality of service of such applications over public or private networks, so that the problems arising can be identified and the quality of these applications enhanced. This is especially important as large companies increasingly use their private intranets to carry telephone calls and save on their telephone budgets. Quality of service can be addressed at various levels. This work is concerned with identifying the weak links, or hops, in the network path of an on-line multimedia application session that contribute to the degradation of the quality of the designated application. Once identified, a degraded hop can be dealt with or potentially replaced by another; alternatively, the routing table can be altered to bypass the degraded hops.
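A minimal sketch of the hop-isolation idea, assuming cumulative round-trip times obtained from TTL-limited (traceroute-style) probes; the RTT figures and the simple differencing are illustrative, not the thesis's measurement method:

```python
# Hypothetical cumulative RTTs (ms) per TTL, as a traceroute would report.
cumulative_rtt_ms = {1: 2.1, 2: 5.0, 3: 48.7, 4: 51.2}

# Each hop's contribution is the difference between successive measurements.
per_hop = {
    ttl: rtt - cumulative_rtt_ms.get(ttl - 1, 0.0)
    for ttl, rtt in sorted(cumulative_rtt_ms.items())
}
worst_ttl = max(per_hop, key=per_hop.get)
print(per_hop)     # hop 3 contributes ~43.7 ms of the path delay
print(worst_ttl)   # -> 3: the degraded hop to repair, replace or route around
```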
24

Hypertext and the training of library and information studies students

Ramaiah, Chennupati K. January 1993 (has links)
As a result of the introduction of computers into teaching, a number of computer interfaces have been developed and used for teaching students at all levels. Among them, hypertext is one of the best known and most frequently discussed. Hypertext, as a non-sequential presentation of information, is a fairly old concept, but it has only recently become available for teaching purposes in a cheap and flexible form.
25

Assessing relevance using automatically translated documents for cross-language information retrieval

Orengo, Viviane Moreira January 2004 (has links)
This thesis focuses on the Relevance Feedback (RF) process, and the scenario considered is that of a Portuguese-English Cross-Language Information Retrieval (CLIR) system. CLIR deals with the retrieval of documents in one natural language in response to a query expressed in another language. RF is an automatic process for query reformulation. The idea behind it is that users are unlikely to produce perfect queries, especially if given just one attempt. The process aims at improving the query specification, which will lead to more relevant documents being retrieved. The method consists of asking the user to analyse an initial sample of documents retrieved in response to a query and judge them for relevance. In that context, two main questions were posed. The first relates to the user's ability to assess the relevance of texts in a foreign language, texts hand-translated into their language, and texts automatically translated into their language. The second question concerns the relationship between the accuracy of the participants' judgements and the improvement achieved through the RF process. In order to answer those questions, this work performed an experiment in which Portuguese speakers were asked to judge the relevance of English documents, documents hand-translated to Portuguese, and documents automatically translated to Portuguese. The results show that machine translation is as effective as hand translation in aiding users to assess relevance. In addition, the impact of misjudged documents on the performance of RF is overall just moderate, and varies greatly for different query topics. This work advances the existing research on RF by considering a CLIR scenario and carrying out user experiments, which analyse aspects of RF and CLIR that remained unexplored until now. The contributions of this work also include: the investigation of CLIR using a new language pair; the design and implementation of a stemming algorithm for Portuguese; and the carrying out of several experiments using Latent Semantic Indexing, which contribute data points to CLIR theory.
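The query-reformulation step of RF is classically implemented with the Rocchio update; the sketch below uses that textbook formulation purely as an illustration (the thesis does not necessarily use this exact method), and the term space and weights are hypothetical:

```python
import numpy as np

def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Move the query vector toward judged-relevant documents and away
    from judged-non-relevant ones (classic Rocchio formulation)."""
    q = alpha * query
    if len(relevant):
        q = q + beta * np.mean(relevant, axis=0)
    if len(nonrelevant):
        q = q - gamma * np.mean(nonrelevant, axis=0)
    return np.clip(q, 0, None)  # negative term weights are usually dropped

# Toy term space: [saude, hospital, futebol] (hypothetical example terms)
query = np.array([1.0, 0.0, 0.0])
relevant = np.array([[0.9, 0.8, 0.0]])     # user judged this doc relevant
nonrelevant = np.array([[0.1, 0.0, 0.9]])  # ... and this one non-relevant
print(rocchio(query, relevant, nonrelevant))  # weight shifts toward 'hospital'
```

In the CLIR setting studied here, the relevance judgements feeding such an update come from translated documents, which is why the accuracy of judgements on machine-translated text matters to the feedback loop.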
26

PowerAqua : open question answering on the semantic web

Lopez, Vanessa January 2011 (has links)
With the rapid growth of semantic information on the Web, the processes of searching and querying these very large amounts of heterogeneous content have become increasingly challenging. This research tackles the problem of supporting users in querying and exploring information across multiple, heterogeneous Semantic Web (SW) sources. A review of the literature on ontology-based Question Answering reveals the limitations of existing technology. Our approach is based on providing a natural language Question Answering interface for the SW: PowerAqua. The realization of PowerAqua represents a considerable advance with respect to other systems, which restrict their scope to an ontology-specific or homogeneous fraction of the publicly available SW content. To our knowledge, PowerAqua is the only system that is able to take advantage of the semantic data available on the Web to interpret and answer user queries posed in natural language. In particular, PowerAqua is uniquely able to answer queries by combining and aggregating information distributed across heterogeneous semantic resources. Here, we provide a complete overview of our work on PowerAqua, including: the research challenges it addresses; its architecture; the techniques we have realised to map queries to semantic data, to integrate partial answers drawn from different semantic resources and to rank alternative answers; and the evaluation studies we have performed to assess the performance of PowerAqua. We believe our experiences can be extrapolated to a variety of end-user applications that wish to open up to large-scale and heterogeneous structured datasets, to be able to exploit effectively what is possibly the greatest wealth of data in the history of Artificial Intelligence.
27

Internet interpersonal communications : an industrial design approach to interfaces and products

Roa, Seungwan January 2004 (has links)
The Internet provides interpersonal communication that does not merely emulate the 'real' world but offers radically innovative design options; this study investigates related theoretical contexts to develop new conclusions which recognise both newly emerging needs and long-term concerns from an industrial design perspective. The study consists of a contextual section and a practice-related section, and generates preliminary design recommendations in the contextual section as a result of exploring and reviewing: 1) socio-psychological; 2) socio-technological; and 3) technological contexts related to internet interpersonal communication. The preliminary design recommendations are based on the most significant internet interpersonal communication potential identified in the contextual section: 1) the absence of the physical body; 2) the need for artificial interfaces; 3) the requirements of human-to-human interaction; and 4) the support of controllability. The practice-related section, utilising simulated practice activity, assesses each preliminary design recommendation in terms of its degree of practicality and efficiency, and concludes by identifying the most important principles for internet interpersonal communication interface and product design, as below: a) to design the interface as an efficient self-presenter, considering human-to-human interaction preferentially; and b) to harmonise the technological provisions and distinct internet interpersonal communication opportunities as a benefit for individual users. The preliminary design recommendations are further revised with respect to their hierarchical relations in connection with the principles above, and it is suggested that an 'omni-dimensional interface/design' would be a sensible direction for internet interpersonal communication interface and product design, as well as for most design disciplines related to information communications technologies. In addition, industrial designers focusing on service design could offer effective and efficient guidance to an industry in which technology is becoming less tangible and in which multidisciplinary collaboration is necessary.
28

Template rule development for information extraction: The net method

Zeranou, Kalliopi January 2008 (has links)
Information Extraction (IE) is becoming increasingly important for the semantic analysis of free-text documents stored in large document repositories, such as the Web. Once free text is analysed for the recognition of concepts and concept interrelations in the events and facts of interest, the resulting structured information becomes a valuable knowledge resource. This resource can be of further use in other information management technologies, such as document summarisation, ontology development, semantic document indexing and question answering, or can be further exploited by data mining and reasoning technologies.
29

Optimised probabilistic data structures for forwarding in information centric networking

Carrea, Laura January 2013 (has links)
In this thesis, a probabilistic approach to the problem of packet forwarding in information centric networks is analysed and further developed. This type of network is based on information identifiers rather than on traditional host addresses. The approach is compact forwarding, where the Bloom filter is the key method for aggregating forwarding information, allowing packets labelled with flat identifiers to be moved at line speed. The Bloom filter reduces state at the nodes, simplifies multicast delivery and introduces new trade-offs in the traditional routing and forwarding design space. However, it is a lossy method which produces potential bandwidth penalties, loops, packet storms and security issues due to false positives. This thesis focuses on false positive control for the probabilistic in-packet forwarding method and proposes two approaches, either to reduce false positives or to exploit them in a useful way. One approach consists of a mechanism to carefully select the number of hash functions used to generate the Bloom filter; on average, the mechanism offers the minimum false positive occurrences depending on the traffic along the links. The other approach is a variation of the Bloom filter, the optihash, which can give better performance than the Bloom filter at the cost of slightly more processing. The optihash is constructed with a family of functions that allows an optimisation which can be performed according to different metrics. Two general metrics are proposed in detail, and some other, application-specific, metrics are explored for in-packet forwarding techniques in different types of networks. The time complexity/false positive trade-off is thoroughly investigated, and the evaluation of the optihash as an alternative to the Bloom filter is performed for in-packet compact forwarding.
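The hash-count selection builds on the standard Bloom-filter analysis: for m bits and n inserted elements, the expected false-positive rate is p(k) ≈ (1 − e^(−kn/m))^k, minimised near k = (m/n)·ln 2. A minimal sketch of that baseline calculation follows; the thesis's traffic-dependent refinement is not reproduced here, and the filter dimensions are hypothetical.

```python
from math import exp, log

def false_positive_rate(m_bits, n_elems, k_hashes):
    """Expected false-positive probability of a standard Bloom filter."""
    return (1.0 - exp(-k_hashes * n_elems / m_bits)) ** k_hashes

m, n = 256, 32  # e.g. a 256-bit in-packet filter encoding 32 link identifiers
k_opt = max(1, round((m / n) * log(2)))  # optimal k = (m/n) * ln 2

print(k_opt)                             # ~6 hash functions
print(false_positive_rate(m, n, k_opt))  # ~0.0215, the near-minimal rate
```

A false positive here means a packet is forwarded over a link that was never encoded in its filter, which is exactly the source of the loops and packet storms the thesis sets out to control.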
30

Adaptive domain modelling for information retrieval

Albakour, M-Dyaa January 2012 (has links)
No description available.
