About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
371

Regulating the technological actor : how governments tried to transform the technology and the market for cryptography and cryptographic services and the implications for the regulation of information and communications technologies

Hosein, Ian January 2003 (has links)
The formulation, adoption, and transformation of policy involves the interaction of actors as they negotiate, accept, and reject proposals. Traditional studies of policy discourse focus on social actors. By studying cryptography policy discourses, I argue that considering both social and technological actors in detail enriches our understanding of policy discourse. The case-based research looks at the various cryptography policy strategies employed by the governments of the United States of America and the United Kingdom. The research method is qualitative, using hermeneutics to elucidate the various actors' interpretations. The research aims to understand policy discourse as a contest of principles involving various government actors advocating multiple regulatory mechanisms to maintain their surveillance capabilities, and the reactions of industry actors, non-governmental organisations, parliamentarians, and epistemic communities. I argue that studying socio-technological discourse helps us to understand the complex dynamics involved in regulation and regulatory change. Interests and alignments may be contingent and unstable. As a result, technologies cannot be regarded as mere representations of social interests and relationships. By capturing the interpretations and articulations of social and technological actors we may attain a better understanding of the regulatory landscape for information and communications technologies.
372

Foundations research in information retrieval inspired by quantum theory

Arafat, Sachi January 2008 (has links)
In the information age, information is useless unless it can be found and used; search engines thereby form a crucial component of research. For something so crucial, information retrieval (IR), the formal discipline investigating search, can be a confusing area of study. There is an underlying difficulty with the very definition of information retrieval, and weaknesses in its operational method, which prevent it from being called a 'science'. The work in this thesis aims to create a formal definition for search, scientific methods for evaluation and comparison of different search strategies, and methods for dealing with the uncertainty associated with user interactions, so that one has the necessary formal foundation to be able to perceive IR as "search science". The key problems restricting a science of search pertain to the ambiguity in the current way in which search scenarios and concepts are specified. This especially affects evaluation of search systems, since under the traditional retrieval approach evaluations are not repeatable, and thus not collectively verifiable. This is mainly due to the dependence on the method of user studies that currently dominates evaluation methodology. This evaluation problem is related to the problem of not being able to formally define the users in user studies. The problem of defining users relates in turn to one of the main retrieval-specific motivations of the thesis, which can be understood by noticing that the uncertainties associated with the interpretation of user interactions are collectively inscribed in a relevance concept, the representation and use of which defines the overall character of a retrieval model. Current research is limited in its understanding of how best to model relevance, a key factor restricting extensive formalization of the IR discipline as a whole.
Thus, the problems of defining search systems and search scenarios are the principal issues preventing formal comparisons of systems and scenarios, in turn limiting the strength of experimental evaluation. Alternative models of search are proposed that remove the need for ambiguous relevance concepts; instead, by arguing for the use of simulation as a normative evaluation strategy for retrieval, some new concepts are introduced that can be employed in judging the effectiveness of search systems. Included are techniques for simulating search, techniques for formal user modelling, and techniques for generating measures of effectiveness for search models. The problems of evaluation and of defining users are generalized by proposing that they are related to the need for a unified framework for defining arbitrary search concepts, search systems, user models, and evaluation strategies. It is argued that this framework depends on a re-interpretation of the concept of search accommodating the increasingly embedded and implicit nature of search on modern operating systems, the internet and networks. The re-interpretation of the concept of search is approached by considering a generalization of the concept of ostensive retrieval, producing definitions of search, information need, user and system that (formally) accommodate the perception of search as an abstract process that can be physical and/or computational. The feasibility of both the mathematical formalism and the physical conceptualizations of quantum theory (QT) is investigated for the purpose of modelling this abstract search process as a physical process. Techniques for representing a search process in the Hilbert space formalism of QT are presented, from which techniques are proposed for generating measures of effectiveness that combine static information, such as term weights, and dynamically changing information, such as probabilities of relevance.
These techniques are used for deducing methods for modelling information need change. In mapping the 'macro level search' process to 'micro level physics', some generalizations were made to the use and interpretation of basic QT concepts such as the wave function description of state and the reversible evolution of states, corresponding to the first and second postulates of quantum theory respectively. Several ways of expressing relevance (and other retrieval concepts) within the derived framework are proposed, arguing that the increase in modelling power gained by the use of QT provides effective ways to characterize this complex concept. Mapping the mathematical formalism of search to that of quantum theory presented insightful perspectives about the nature of search. However, differences between the operational semantics of quantum theory and search restricted the usefulness of the mapping. In trying to resolve these semantic differences, a semi-formal framework was developed that is mid-way between a programmatic language, a state-based language resembling the way QT models states, and a process description language. By using this framework, this thesis attempts to intimately link the theory and practice of information retrieval and the evaluation of the retrieval process. The result is a novel and useful way of formally discussing, modelling and evaluating search concepts, search systems and search processes.
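The Hilbert-space analogy described in this abstract can be illustrated with a minimal sketch: documents and queries represented as unit vectors over a shared term basis, with a Born-rule-style squared inner product standing in for a probability of relevance. The term weights and basis below are illustrative assumptions, not values or models taken from the thesis.

```python
# Sketch of a Hilbert-space view of retrieval: p = |<q|d>|^2 for unit
# vectors q (query) and d (document). Weights are illustrative only.
import math

def normalise(vec):
    """Scale a term-weight vector to unit length."""
    norm = math.sqrt(sum(w * w for w in vec))
    return [w / norm for w in vec]

def relevance_probability(query, doc):
    """Born-rule analogue: squared inner product of unit vectors."""
    q, d = normalise(query), normalise(doc)
    inner = sum(qi * di for qi, di in zip(q, d))
    return inner * inner

# Basis: three terms; weights could mix static term weights with
# dynamically changing relevance information, as the abstract suggests.
query = [1.0, 1.0, 0.0]
doc_a = [2.0, 0.0, 0.0]   # shares one query term
doc_b = [0.0, 0.0, 3.0]   # orthogonal: no shared terms
```

In this toy picture an orthogonal document gets probability zero, while a document sharing one of two equally weighted query terms gets probability 0.5.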
373

Haptic augmentation of the cursor : transforming virtual actions into physical actions

Oakley, Ian January 2003 (has links)
This thesis demonstrates, through the exploration of two very different examples, the general claim that haptic feedback relating to a user's representation in a computer system (typically a cursor) can lead to improvements in objective performance and subjective experience. Design guidelines covering each of these two topics are also presented, to ensure that the research described here can be readily adopted by other researchers, designers and system developers. The first topic to be investigated was desktop user interfaces. This thesis describes the design of a variety of different forms of haptic feedback for use with a number of different Graphical User Interface (GUI) widgets, or widget groups. Two empirical evaluations of these designs are also described in some depth. The results of these studies indicate that although haptic feedback can provide improvements in objective performance, it can also reduce performance and increase subjective workload if inappropriately applied. From these results, and from the previous literature, detailed guidelines were drawn up covering the addition of haptic feedback to GUIs. The goal of these guidelines is to support the creation of performance-enhancing haptic feedback. The second topic examined was communication in interactive collaborative systems. The design of a suite of haptic communication mechanisms is presented in detail, followed by two studies investigating different aspects of its use. The first study focuses on the subjective influence of the haptic communication as a whole, while the second is a more thorough look at one particular form of the feedback and includes objective measurements. The combined results of these studies indicate that haptic feedback has valuable potential for increasing the quality of a user's subjective experience. Observations from these studies also reveal insights into the role of haptic feedback in communication.
A set of guidelines summing up this research and the previous literature relevant to this topic is then presented. As research in this domain is in its infancy, the goal of these guidelines is to concisely present the main issues and potential benefits that respectively restrict and drive this work.
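One widely used form of cursor-directed haptic feedback of the kind this abstract discusses is the "gravity well": a spring force that pulls the cursor toward a widget's centre once it enters the widget. The sketch below shows that generic effect only; the radius and stiffness values are illustrative assumptions, not the designs evaluated in the thesis.

```python
# Generic "gravity well" haptic effect: a spring force F = k * x toward
# the target centre, active only while the cursor is inside the well.
# Radius and stiffness are illustrative, not thesis parameters.

def gravity_well_force(cursor, target, radius=20.0, stiffness=0.05):
    """Return the (fx, fy) force on the haptic device for this cursor pose."""
    dx, dy = target[0] - cursor[0], target[1] - cursor[1]
    if (dx * dx + dy * dy) ** 0.5 > radius:
        return (0.0, 0.0)               # outside the well: no feedback
    return (stiffness * dx, stiffness * dy)

inside = gravity_well_force((105.0, 100.0), (100.0, 100.0))
outside = gravity_well_force((200.0, 200.0), (100.0, 100.0))
```

Applying such a force only near legitimate targets, rather than everywhere, is exactly the kind of "inappropriately applied feedback can hurt performance" trade-off the guidelines above address.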
374

Automatic techniques for detecting and exploiting symmetry in model checking

Donaldson, Alastair F. January 2007 (has links)
The application of model checking is limited by the state-space explosion problem: as the number of components represented by a model increases, the worst-case size of the associated state-space grows exponentially. Current techniques can handle limited kinds of symmetry, e.g. full symmetry between identical components in a concurrent system. They avoid the problem of automatic symmetry detection by requiring the user to specify the presence of symmetry in a model (explicitly, or by annotating the associated specification using additional language keywords), or by restricting the input language of a model checker so that only symmetric systems can be specified. Additionally, computing unique representatives for each symmetric equivalence class is easy for these limited kinds of symmetry. We present a theoretical framework for symmetry reduction which can be applied to explicit state model checking. The framework includes techniques for automatic symmetry detection using computational group theory, which can be applied with no additional user input. These techniques detect structural symmetries induced by the topology of a concurrent system, so our framework includes exact and approximate techniques to efficiently exploit arbitrary symmetry groups which may arise in this way. These techniques are also based on computational group theoretic methods. We prove that our framework is logically sound, and demonstrate its general applicability to explicit state model checking. By providing a new symmetry reduction package for the SPIN model checker, we show that our framework can be feasibly implemented as part of a system which is widely used in both industry and academia. Through a study of SPIN users, we assess the usability of our automatic symmetry detection techniques in practice.
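The "easy" case the abstract mentions, computing unique representatives under full symmetry between identical components, can be sketched briefly: map each state (a tuple of component-local states) to the lexicographic minimum over all permutations of its components. This is generic symmetry reduction for the full symmetric group only, not the thesis's techniques for arbitrary groups; the toy states are illustrative.

```python
# Symmetry reduction under full component symmetry: the canonical
# representative of a state's orbit is its lexicographically smallest
# permutation. Exhaustive enumeration is fine for this small sketch.
from itertools import permutations

def representative(state):
    """Canonical representative of the symmetric equivalence class."""
    return min(permutations(state))

def reduced_state_space(states):
    """Collapse a set of states to one representative per orbit."""
    return {representative(s) for s in states}

# Three symmetric permutations of one orbit, plus one distinct state:
states = [(0, 1, 2), (2, 1, 0), (1, 0, 2), (0, 0, 0)]
```

Four concrete states collapse to two orbit representatives, which is the source of the exponential savings symmetry reduction offers; for the arbitrary symmetry groups detected automatically in the thesis, exact minimisation like this becomes expensive, motivating the approximate strategies mentioned above.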
375

Word sense disambiguation and information retrieval

Sanderson, Mark January 1996 (has links)
Starting with a review of previous research that attempted to improve the representation of documents in IR systems, this research is reassessed in the light of word sense ambiguity. It will be shown that a number of the attempts' successes or failures were due to the noticing or ignoring of ambiguity. In the review of disambiguation research, many varied techniques for performing automatic disambiguation are introduced. Research on the disambiguating abilities of people is also presented. It has been found that people are inconsistent when asked to disambiguate words, and this causes problems when testing the output of an automatic disambiguator. The first of two sets of experiments to investigate the relationship between ambiguity, disambiguation, and IR involves a technique whereby ambiguity and disambiguation can be simulated in a document collection. The results of these experiments lead to the conclusion that query size plays an important role in the relationship between ambiguity and IR. Retrievals based on very small queries suffer particularly from ambiguity and benefit most from disambiguation. Other queries, however, contain a sufficient number of words to provide a form of context that implicitly resolves the query words' ambiguities. In general, ambiguity is found to be not as great a problem for IR systems as might have been thought, and the errors made by a disambiguator can be more of a problem than the ambiguity it is trying to resolve. In the complementary second set of experiments, a disambiguator is built and tested; it is applied to a document test collection, and an IR system is adjusted to accommodate the sense information in the collection. The conclusions of these experiments broadly confirm those of the previous set.
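The simulation technique mentioned above is commonly realised with artificial "pseudowords": two distinct words are conflated into a single token, introducing controlled ambiguity into a collection, and disambiguation is simulated by restoring the originals. A minimal sketch of that conflation step follows; the word pair and toy document are illustrative, not taken from the thesis test collection.

```python
# Pseudoword-style ambiguity simulation: conflate two distinct words into
# one artificial ambiguous token. Disambiguation is then simulated by
# mapping each occurrence back to its (recorded) original word.

def add_ambiguity(tokens, word_a, word_b, pseudoword):
    """Replace both member words with their conflated pseudoword."""
    return [pseudoword if t in (word_a, word_b) else t for t in tokens]

doc = ["the", "bank", "approved", "the", "loan", "by", "the", "river", "shore"]
ambiguous = add_ambiguity(doc, "bank", "shore", "bank/shore")
```

Because the true sense of each pseudoword occurrence is known, retrieval can be evaluated with ambiguity present, with perfect disambiguation, and with a disambiguator that errs at a controlled rate, which is what allows query size and disambiguation accuracy to be studied systematically.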
376

A physically-based muscle and skin model for facial animation

Coull, Alasdair D. January 2006 (has links)
Facial animation is a popular area of research which has been around for over thirty years, but even over this long time scale, automatically creating realistic facial expressions remains an unsolved goal. This work furthers the state of the art in computer facial animation by introducing a new muscle and skin model and a method of easily transferring a full muscle and bone animation setup from one head mesh to another with very little user input. The developed muscle model allows muscles of any shape to be accurately simulated, preserving volume during contraction and interacting with surrounding muscles and skin in a lifelike manner. The muscles can drive a rigid body model of a jaw, giving realistic physically-based movement to all areas of the face. The skin model has multiple layers, mimicking the natural structure of skin; it connects onto the muscle model and is deformed realistically by the movements of the muscles and underlying bones. The skin smoothly transfers underlying movements into skin surface movements and propagates forces smoothly across the face. Once a head model has been set up with muscles and bones, moving this muscle and bone set to another head is a simple matter using the developed techniques. The developed software employs principles from forensic reconstruction, using specific landmarks on the head to map the bones and muscles to the new head model, and once the muscles and skull have been quickly transferred, they provide animation capabilities on the new mesh within minutes.
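The volume-preservation property described above can be illustrated with a deliberately simplified stand-in: if a muscle is idealised as a cylinder, contracting its length forces its cross-sectional radius to grow so that volume stays constant. The thesis handles muscles of arbitrary shape; the cylinder and the numbers below are illustrative assumptions only.

```python
# Volume preservation for an idealised cylindrical muscle: keep
# V = pi * r^2 * L constant as the length L changes under contraction.
import math

def contracted_radius(rest_length, rest_radius, new_length):
    """Radius after a length change, holding cylinder volume constant."""
    volume = math.pi * rest_radius ** 2 * rest_length
    return math.sqrt(volume / (math.pi * new_length))

# A muscle at rest length 10 contracting to length 8 must bulge outward.
r = contracted_radius(rest_length=10.0, rest_radius=1.0, new_length=8.0)
```

This bulging on contraction is what lets a muscle model push visibly against the surrounding skin layers, which is central to the lifelike interaction the abstract describes.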
377

Selective web information retrieval

Plachouras, Vasileios January 2006 (has links)
This thesis proposes selective Web information retrieval, a framework formulated in terms of statistical decision theory, with the aim of applying an appropriate retrieval approach on a per-query basis. The main component of the framework is a decision mechanism that makes this selection for each query. The selection of a particular retrieval approach is based on the outcome of an experiment, which is performed before the final ranking of the retrieved documents. The experiment is a process that extracts features from a sample of the set of retrieved documents. This thesis investigates three broad types of experiments. The first counts the occurrences of query terms in the retrieved documents, indicating the extent to which the query topic is covered in the document collection. The second type of experiment considers information from the distribution of retrieved documents in larger aggregates of related Web documents, such as whole Web sites, or directories within Web sites. The third type of experiment estimates the usefulness of the hyperlink structure among a sample of the set of retrieved Web documents. The proposed experiments are evaluated in the context of both informational and navigational search tasks with an optimal Bayesian decision mechanism, where it is assumed that relevance information exists. This thesis further investigates the implications of applying selective Web information retrieval in an operational setting, where the tuning of a decision mechanism is based on limited existing relevance information and the information retrieval system's input is a stream of queries related to mixed informational and navigational search tasks. First, the experiments are evaluated using different training and testing query sets, as well as a mixture of different types of queries.
Second, query sampling is introduced, in order to approximate the queries that a retrieval system receives, and to tune an ad-hoc decision mechanism with a broad set of automatically sampled queries.
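The shape of such a per-query decision mechanism can be sketched briefly: an "experiment" extracts a feature from a sample of retrieved documents (here, the fraction containing all query terms, echoing the first experiment type above), and a simple threshold rule, standing in for the thesis's Bayesian decision mechanism, selects a retrieval approach. The threshold, the approach names, and the toy documents are illustrative assumptions.

```python
# Per-query selective retrieval, sketched: run an experiment on a sample
# of retrieved documents, then choose a retrieval approach from its
# outcome. The threshold rule here stands in for a Bayesian decision.

def coverage_feature(query_terms, sampled_docs):
    """Fraction of sampled documents containing every query term."""
    hits = sum(1 for d in sampled_docs if all(t in d for t in query_terms))
    return hits / len(sampled_docs)

def select_approach(query_terms, sampled_docs, threshold=0.5):
    """Select a retrieval approach based on the experiment's outcome."""
    if coverage_feature(query_terms, sampled_docs) >= threshold:
        return "content-only ranking"        # topic well covered
    return "content+link-structure ranking"  # sparse coverage: use links

# Each sampled document is modelled as its set of terms (illustrative).
docs = [{"web", "search", "engine"}, {"web", "search"}, {"web", "browser"}]
```

For example, `select_approach({"web", "search"}, docs)` sees two of three sampled documents covering the query and keeps content-only ranking, while `{"search", "engine"}` falls below the threshold and triggers the link-aware approach.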
378

An investigation of a remote visual navigation system for a building inspection robot

Paterson, Alastair Mark January 1996 (has links)
The work presented here shows the development of a machine vision algorithm for finding the position of a building inspection robot on the outside of a large building. The reasons for external building inspection are introduced, along with the types of tests used. Existing methods are examined, giving their limitations in terms of practicality and safety, and an alternative using remote access is proposed. The work concentrates on the navigational aspects and shows how one possible solution using machine vision could be implemented; this is compared to similar work carried out elsewhere. The major part of the thesis covers the development of the robot location algorithm, starting with the fundamentals of image processing and finishing with the actual robot's position. Different methods of edge detection are investigated, and a pixel linking routine is used to group together data in an image that form features and principal lines. The algorithm investigates the use of the lines for detecting vanishing points and tries to identify the features highlighted in the image. The most significant part of the work concentrates on the development of a method of identifying specific features, such as a target on the robot and different windows, along with a way of matching the features to a computer model of the building, thus enabling the position of the robot to be calculated. Results are given showing how the algorithm performed on a model building and robot in the laboratory, with various tests using different camera positions, image enhancement and spurious features. The results presented show that the algorithm was capable of finding the position of a model robot to sufficient accuracy (typically 3% of the size of the robot target) and that the errors measured were predictable.
Additional results show how the algorithm performed on a real building and indicate the problems associated with real images, with the conclusion that the algorithm will work under a certain range of conditions, provided that certain elements of it can be improved.
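One step in the pipeline above, detecting a vanishing point from the linked principal lines, can be illustrated with a small geometric sketch: image lines that are parallel in the scene (e.g. window edges on a facade) converge at a common point in the image. The intersection formula is standard 2D geometry; the coordinates are illustrative, not data from the thesis experiments.

```python
# Vanishing point estimation, minimally: intersect two image lines that
# correspond to parallel scene edges. Each line is given by two points.

def line_intersection(p1, p2, p3, p4):
    """Intersection of line p1-p2 with line p3-p4 (None if parallel)."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if denom == 0:
        return None
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / denom
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

# Two converging edges that should meet at a shared vanishing point:
vp = line_intersection((0, 0), (2, 1), (0, 4), (2, 3))
```

In practice many extracted lines vote for candidate vanishing points, and spurious features are rejected because they fail to agree with the dominant intersections, which is consistent with the robustness testing described above.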
379

ANTMANET : a novel routing protocol for mobile ad-hoc networks based on ant colony optimisation

Abuhmida, Mabrouka S. January 2017 (has links)
The core aim of this research is to present "ANTMANET", a novel routing protocol for Mobile Ad-Hoc Networks (MANETs). The proposed protocol aims to reduce the network overhead and delay introduced by node mobility in MANETs. There are two techniques embedded in this protocol, the "Local Zone" technique and the "North Neighbour" table. They take advantage of the fact that nodes can obtain their location information by any means, in order to reduce the network overhead during the route discovery phase and to reduce the size of the routing table, guaranteeing faster convergence. ANTMANET is a hybrid Ant Colony Optimisation (ACO) based routing protocol. ACO is a Swarm Intelligence (SI) routing algorithm that is well known for its high-quality performance compared to other distributed routing algorithms such as Link State and Distance Vector. ANTMANET has been benchmarked in various scenarios against the ACO routing protocol ANTHOCNET and several standard routing protocols, including Ad-Hoc On-Demand Distance Vector (AODV), Landmark Ad-Hoc Routing (LANMAR), and Dynamic MANET On-demand (DYMO). Performance metrics such as overhead, end-to-end delay, throughput and jitter were used to evaluate ANTMANET's performance. Experiments were performed using the QualNet simulator. A benchmark test was conducted to evaluate the performance of an ANTMANET network against an ANTHOCNET network, with both protocols benchmarked against AODV as an established MANET protocol. ANTMANET has demonstrated a notable performance edge when the core algorithm has been optimised using the novel adaptation method that is proposed in this thesis. Based on the simulation results, the proposed protocol has shown 5% less end-to-end delay than ANTHOCNET. In regard to network overhead, the proposed protocol has shown 20% less overhead than ANTHOCNET. In terms of comparative throughput, ANTMANET at its best has delivered 25% more packets than ANTHOCNET.
The overall validation results indicate that the proposed protocol was successful in reducing the network overhead and delay at both high and low mobility speeds when compared with the AODV, DYMO and LANMAR protocols. ANTMANET achieved at least 45% less delay than AODV, 60% less delay than DYMO and 55% less delay than LANMAR. In terms of throughput, ANTMANET at its best has delivered 35% more packets than AODV, 40% more than DYMO and 45% more than LANMAR. With respect to the network overhead results, ANTMANET has shown 65% less overhead than AODV, 70% less than DYMO and 60% less than LANMAR. Regarding jitter, ANTMANET at its best has shown 60% less jitter than AODV, 55% less jitter than DYMO and 50% less jitter than LANMAR.
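The ACO mechanism underlying protocols of this family can be sketched generically: forward ants sample next hops in proportion to pheromone on the outgoing links, backward ants reinforce the links of good routes, and pheromone on all links evaporates over time. This is textbook ACO routing, not ANTMANET's specific algorithm; the evaporation rate, deposit amount, and topology are illustrative assumptions.

```python
# Generic ACO routing primitives: evaporation, backward-ant reinforcement,
# and pheromone-proportional next-hop selection probabilities.

def evaporate(pheromone, rho=0.1):
    """Evaporate pheromone on every link: tau <- (1 - rho) * tau."""
    return {link: (1 - rho) * tau for link, tau in pheromone.items()}

def reinforce(pheromone, route, deposit=1.0):
    """Backward ant deposits pheromone along each link of a good route."""
    updated = dict(pheromone)
    for link in zip(route, route[1:]):
        updated[link] = updated.get(link, 0.0) + deposit
    return updated

def next_hop_probabilities(pheromone, node, neighbours):
    """Probability of each next hop, proportional to link pheromone."""
    taus = [pheromone.get((node, n), 0.0) for n in neighbours]
    total = sum(taus)
    return {n: t / total for n, t in zip(neighbours, taus)}

# One update cycle on a toy topology: a good route A->B->D is reinforced.
pheromone = {("A", "B"): 1.0, ("A", "C"): 1.0}
pheromone = reinforce(evaporate(pheromone), ["A", "B", "D"])
probs = next_hop_probabilities(pheromone, "A", ["B", "C"])
```

After one cycle the reinforced link A-B carries more pheromone than A-C, so subsequent ants favour it; evaporation is what lets the protocol forget stale routes when node mobility changes the topology, the overhead/delay trade-off the thesis targets.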
380

Machine learning techniques for implicit interaction using mobile sensors

Md Noor, Mohammad Faizuddin January 2016 (has links)
Interactions with mobile devices normally happen in an explicit manner, meaning that they are initiated by the users. Yet users are typically unaware that they also interact implicitly with their devices. For instance, our hand pose changes naturally when we type text messages. Whilst the touchscreen captures finger touches, the hand movements during this interaction go unused. If this implicit hand movement is observed, it can be used as additional information to support or enhance the users' text entry experience. This thesis investigates how implicit sensing can be used to improve the quality of existing, standard interaction techniques. In particular, it looks into enhancing front-of-device interaction through implicit sensing of back-of-device and hand movement. We conduct this investigation using machine learning techniques. We examine how sensor data gathered via implicit sensing can be used to predict a certain aspect of an interaction. For instance, one of the questions that this thesis attempts to answer is whether hand movement during a touch targeting task correlates with the touch position. This is a complex relationship to understand, but it can be best explained through machine learning. Using machine learning as a tool, such correlation can be measured, quantified, understood and used to make predictions of future touch positions. Furthermore, this thesis also evaluates the predictive power of the sensor data. We show this through a number of studies. In Chapter 5 we show that probabilistic modelling of sensor inputs and recorded touch locations can be used to predict the general area of future touches on the touchscreen. In Chapter 7, using SVM classifiers, we show that data from implicit sensing of general mobile interactions is user-specific, and can therefore be used to identify users implicitly. In Chapter 6, we also show that touch interaction errors can be detected from sensor data.
In our experiment, we show that there are sufficiently distinguishable patterns between normal interaction signals and signals that are strongly correlated with interaction error. In all studies, we show that a performance gain can be achieved by combining sensor inputs.
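The implicit user-identification idea above can be sketched with a deliberately simple classifier: per-user feature vectors derived from motion sensing are summarised by a centroid, and a new sample is attributed to the nearest one. A nearest-centroid rule stands in here for the SVM classifiers used in the thesis, and the feature vectors are illustrative, not real sensor data.

```python
# Implicit user identification, sketched: summarise each user's motion
# features by a centroid and classify new samples by nearest centroid
# (a stand-in for the SVM classifiers described in the abstract).
import math

def centroid(samples):
    """Mean feature vector of one user's training samples."""
    n = len(samples)
    return [sum(s[i] for s in samples) / n for i in range(len(samples[0]))]

def identify(sample, centroids):
    """Return the user whose centroid is nearest to the sample."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda user: dist(sample, centroids[user]))

# Toy per-user hand-movement features, e.g. (mean tilt, tilt variance).
training = {
    "alice": [[0.1, 0.30], [0.2, 0.35]],
    "bob":   [[0.9, 0.05], [0.8, 0.10]],
}
centroids = {user: centroid(samples) for user, samples in training.items()}
```

The same pattern, extract features from implicit sensor streams and feed them to a classifier, underlies the touch-area prediction and error-detection studies as well; only the labels change.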
