811

Theoretical evaluation of XML retrieval

Blanke, Tobias January 2011 (has links)
This thesis develops a theoretical framework to evaluate XML retrieval. XML retrieval deals with retrieving those document parts that specifically answer a query. It is concerned with using the document structure to improve the retrieval of information from documents by delivering only those parts of a document that an information need is about. We define a theoretical evaluation methodology based on the idea of 'aboutness' and apply it to XML retrieval models. Situation Theory is used to express the aboutness properties of XML retrieval models. We develop a dedicated methodology for the evaluation of XML retrieval and apply this methodology to five XML retrieval models and to other XML retrieval topics such as evaluation methodologies, filters and experimental results.
812

A hierarchical active binocular robot vision architecture for scene exploration and object appearance learning

Aragon Camarasa, Gerardo January 2012 (has links)
This thesis presents an investigation of a computational model of hierarchical visual behaviours within an active binocular robot vision architecture. The robot vision system is able to localise multiple instances of the same object class, while simultaneously maintaining vergence and directing its gaze to attend to and recognise objects within cluttered, complex scenes. This is achieved by implementing all image analysis in an egocentric symbolic space without creating explicit pixel-space maps and without the need for calibration or other knowledge of the camera geometry. An important aspect of the active binocular vision paradigm is that visual features in both camera eyes must be bound together in order to drive visual search to saccade, locate and recognise putative objects or salient locations in the robot's field of view. The system structure is based on the “attentional spotlight” metaphor of biological systems and a collection of abstract and reactive visual behaviours arranged in a hierarchical structure. Several studies have shown that the human brain represents and learns objects for recognition by snapshots of two-dimensional views of the imaged scene that happen to contain the object of interest during active interaction with (exploration of) the environment. Likewise, psychophysical findings indicate that the primate visual cortex represents common everyday objects by a hierarchical structure of their parts or sub-features and, consequently, recognises them by simple but imperfect 2D approximations of object-part views. This thesis incorporates the above observations into an active visual learning behaviour in the hierarchical active binocular robot vision architecture. By actively exploring the object viewing sphere (as higher mammals do), the robot vision system automatically synthesises its own part-based object representation from multiple observations while a human teacher indicates the object and supplies a classification name. It is proposed to adopt the computational concept of a visual learning exploration mechanism that controls the accumulation of visual evidence and directs attention towards spatially salient object parts. The behavioural structure of the binocular robot vision architecture is loosely modelled on the WHAT and WHERE visual streams. The WHERE stream maintains and binds spatial attention on the object-part coordinates that egocentrically characterise the location of the object of interest, and extracts spatio-temporal properties of feature coordinates and descriptors. The WHAT stream either determines the identity of an object or triggers a learning behaviour that stores view-invariant feature descriptions of the object part. The robot vision system is therefore capable of performing a collection of specific visual tasks such as vergence, detection, discrimination, recognition, localisation and multiple same-instance identification. This classification of tasks enables the robot vision system to execute and fulfil specified high-level tasks, e.g. autonomous scene exploration and active object appearance learning.
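The WHAT/WHERE split can be sketched, very loosely, as two cooperating processes: one binding attention to egocentric feature coordinates, the other recognising or learning the attended object part. The Python sketch below is purely illustrative; the class and method names are assumptions, not the architecture's actual interfaces:

```python
class WhereStream:
    """Binds spatial attention to egocentric coordinates of salient features."""
    def attend(self, features):
        # Illustrative heuristic: attend to the single most salient feature.
        return max(features, key=lambda f: f["saliency"])

class WhatStream:
    """Recognises an attended part, or stores it when a teacher supplies a name."""
    def __init__(self):
        self.memory = {}  # object label -> list of stored part descriptors

    def process(self, descriptor, teacher_label=None):
        for label, parts in self.memory.items():
            if descriptor in parts:
                return f"recognised: {label}"
        if teacher_label is not None:  # learning behaviour triggered by the teacher
            self.memory.setdefault(teacher_label, []).append(descriptor)
            return f"learned: {teacher_label}"
        return "unknown"

where, what = WhereStream(), WhatStream()
part = where.attend([{"saliency": 0.4, "descriptor": "handle"},
                     {"saliency": 0.9, "descriptor": "rim"}])
print(what.process(part["descriptor"], teacher_label="mug"))  # learned: mug
print(what.process(part["descriptor"]))                       # recognised: mug
```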
813

A generic approach to the evolution of interaction in ubiquitous systems

McBryan, Tony January 2011 (has links)
This dissertation addresses the challenge of configuring modern (ubiquitous, context-sensitive, mobile, etc.) interactive systems, where it is difficult or impossible to predict (i) the resources available for evolution, (ii) the criteria for judging the success of the evolution, and (iii) the degree to which human judgements must be involved in the evaluation process used to determine the configuration. In this thesis a conceptual model of interactive system configuration over time (known as interaction evolution) is presented which relies upon the following steps: (i) identification of opportunities for change in a system, (ii) reflection on the available configuration alternatives, (iii) decision-making, (iv) implementation, and finally iteration of the process. This conceptual model underpins the development of a dynamic evolution environment based on a notion of configuration evaluation functions (hereafter referred to as evaluation functions) that provides greater flexibility than current solutions and, when supported by appropriate tools, can provide a richer set of evaluation techniques and features that are difficult or impossible to implement in current systems. Specifically, this approach supports changes to the approach, style or mode of use employed for configuration; these features may result in more effective systems, less effort to configure them, and a greater degree of control offered to the user. The contributions of this work include: (i) establishing the need for configuration evolution through a literature review and a motivating case study experiment, (ii) development of a conceptual process model supporting interaction evolution, (iii) development of a model based on the notion of evaluation functions which is shown to support a wide range of interaction configuration approaches, (iv) a characterisation of the configuration evaluation space, followed by (v) an implementation of these ideas used in (vi) a series of longitudinal technology probes and investigations into the approaches.
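As a rough illustration of the evaluation-function idea, the sketch below scores configuration alternatives with pluggable functions (including one standing in for a human judgement) and adopts the highest-scoring alternative; the criteria, names and weights are illustrative assumptions, not taken from the thesis:

```python
from typing import Callable, Dict, List

Configuration = Dict[str, str]
EvaluationFunction = Callable[[Configuration], float]

def prefer_large_displays(config: Configuration) -> float:
    """Illustrative automatic criterion: favour output on larger displays."""
    return {"watch": 0.2, "phone": 0.5, "wall_display": 1.0}.get(config.get("output", ""), 0.0)

def user_override(preferred_output: str) -> EvaluationFunction:
    """A human judgement wrapped up as an evaluation function."""
    return lambda config: 1.0 if config.get("output") == preferred_output else 0.0

def evolve(alternatives: List[Configuration],
           evaluators: List[EvaluationFunction]) -> Configuration:
    """Adopt the configuration alternative with the highest combined score."""
    return max(alternatives, key=lambda c: sum(f(c) for f in evaluators))

alternatives = [{"output": "phone"}, {"output": "wall_display"}]
print(evolve(alternatives, [prefer_large_displays, user_override("phone")]))
```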
814

An empirical investigation into strategies for guiding interactive search

Brumby, Duncan Paul January 2005 (has links)
One activity people engage in when using the web is estimating the likelihood that labelled links will lead to their goal. However, they must also decide whether to select one of the assessed items immediately or make further assessments. There are a number of theoretical accounts of this behaviour. The accounts differ as to whether, for example, they assume that people consider all of the items on a page prior to making a selection, or tend to make a selection immediately after assessing a highly relevant item. A series of experiments were conducted to discriminate between these accounts. The empirical studies demonstrated that people are in fact more strategic and sensitive to context than previous models suggest. People sometimes choose an option that appears good enough, but sometimes choose to continue checking. The decision to select an item was found to be sensitive to the relevance of labels in the immediate and distal choice sets and also to the number of options in the immediately available choice set. The data were used to motivate computational models of interactive search. An implication of the work presented here is that engineering models that aim to predict the time required by a typical user to search a web page structure, or which labelled link a user is likely to select for a given goal, need to be updated so that they are sensitive to the extent to which people adapt their strategy to features of the context, such as the distractor semantics and the number of distractors.
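The contrast between the competing accounts can be made concrete with a toy simulation: one strategy assesses every labelled link before choosing, the other selects as soon as a link seems relevant enough. The relevance values and threshold below are purely illustrative:

```python
def comprehensive_strategy(relevances):
    """Assess every labelled link on the page, then pick the best one."""
    return max(range(len(relevances)), key=lambda i: relevances[i])

def satisficing_strategy(relevances, threshold=0.8):
    """Select immediately once a link looks relevant enough; otherwise fall back to the best."""
    for i, r in enumerate(relevances):
        if r >= threshold:
            return i
    return max(range(len(relevances)), key=lambda i: relevances[i])

# Hypothetical relevance assessments for the links on one page (goal-dependent).
page = [0.3, 0.85, 0.6, 0.95]
print(comprehensive_strategy(page))   # index 3: best overall
print(satisficing_strategy(page))     # index 1: first "good enough" link
```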
815

Analysing web-based malware behaviour through client honeypots

Alosefer, Yaser January 2012 (has links)
With the increase in the use of the internet, there has been a rise in the number of attacks on servers. These attacks can be successfully defended against using security technologies such as firewalls, IDS and anti-virus software, so attackers have developed new methods to spread their malicious code by using web pages, which can affect many more victims than the traditional approach. Attackers now use these websites to threaten users without the users' knowledge or permission. The defence against such websites is less effective than traditional security products, meaning attackers have the advantage of being able to target a greater number of users. Malicious web pages attack users through their web browsers, and the attack can occur even if the user only visits the web page; this type of attack is called a drive-by download attack. This dissertation explores how web-based attacks work and how users can be protected from this type of attack based on the behaviour of the remote web server. We propose a system based on client honeypot technology. The client honeypot is able to scan malicious web pages based on their behaviour and can therefore work as an anomaly detection system. The proposed system has three main models: state machine, clustering and prediction models. All three models work together to protect users from known and unknown web-based attacks. This research demonstrates the challenges faced by end users and how an attacker can easily target systems using drive-by download attacks. In this dissertation we discuss how the proposed system works and the research challenges we are trying to solve, such as how to group web-based attacks into behaviour groups, how to avoid the obfuscation attempts used by attackers, and how to predict future malicious behaviour for a given web-based attack based on its behaviour in real time. Finally, we demonstrate how the proposed system works by implementing a prototype application and conducting a number of experiments to show how we were able to model, cluster and predict web-based attacks based on their behaviour. The experimental data was collected randomly from online blacklist websites.
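As a schematic illustration of the state-machine model, the sketch below replays a sequence of observed server behaviours through a small transition table to classify a visit; the states, events and transitions are hypothetical, not those defined in the thesis:

```python
# Hypothetical abstract states and transitions for a monitored web page visit.
TRANSITIONS = {
    ("clean", "redirect"): "suspicious",
    ("clean", "file_download"): "suspicious",
    ("suspicious", "obfuscated_script"): "suspicious",
    ("suspicious", "file_download"): "malicious",
}

def classify_visit(events, start="clean"):
    """Replay observed behaviours through the state machine and return the final state."""
    state = start
    for event in events:
        state = TRANSITIONS.get((state, event), state)
    return state

# Behaviour sequence captured by the client honeypot for one URL (illustrative).
print(classify_visit(["redirect", "obfuscated_script", "file_download"]))  # malicious
```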
816

Location aware data aggregation for efficient message dissemination in Vehicular Ad Hoc Networks

Milojevic, M. January 2015 (has links)
The main contribution of this thesis is the LA mechanism: an intelligent, location-aware data aggregation mechanism for real-time observation, estimation and efficient dissemination of messages in VANETs. The proposed mechanism is based on a generic modelling approach which makes it applicable to any type of VANET application. The data aggregation mechanism proposed in this thesis introduces a location-awareness technique which provides dynamic segmentation of the roads, enabling efficient spatiotemporal database indexing. It further provides location context to the messages without the use of advanced positioning systems such as satellite navigation and digital maps. The mechanism ensures that the network load is significantly reduced by using passive clustering and adaptive broadcasting to minimise the number of exchanged messages. Incoming messages are fused by a Kalman filter, providing the optimal estimation that is particularly useful in urban environments, where incoming measurements are very frequent and can be interpreted by the vehicle as noisy measurements. The scheme allows the comparison of aggregates and single observations, which enables their merging and better overall accuracy. Old information in aggregates is removed by real-time database refreshing, leaving only newer, relevant information for a driver to make real-time decisions in traffic. The LA mechanism is evaluated by extensive simulations to show its efficiency and accuracy.
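A one-dimensional sketch of the Kalman-filter fusion described above, estimating, for example, the average speed on a road segment from frequent noisy reports; the initial state and noise parameters are illustrative assumptions:

```python
def kalman_update(estimate, variance, measurement, meas_variance):
    """Fuse a new noisy observation into the current estimate (scalar Kalman update)."""
    gain = variance / (variance + meas_variance)
    new_estimate = estimate + gain * (measurement - estimate)
    new_variance = (1.0 - gain) * variance
    return new_estimate, new_variance

# Incoming speed reports (km/h) for one road segment, e.g. from beaconing vehicles.
estimate, variance = 50.0, 10.0          # initial aggregate for the segment
for speed in [48.0, 55.0, 61.0, 47.0]:   # frequent, noisy urban measurements
    estimate, variance = kalman_update(estimate, variance, speed, meas_variance=25.0)
print(round(estimate, 1), round(variance, 2))
```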
817

Hacking into the emotion-creativity link : two new approaches to interactive systems that influence the relationship between emotion and creativity

de Rooij, A. January 2016 (has links)
Emotions can influence creative thinking. The ability to experience the emotions that augment creativity can therefore help people achieve higher creative task performance. How to design interactive systems that can effectively make use of this potential is, however, still an unanswered question. To explore possible answers to this question we have developed two novel approaches to interactive systems that can be used to effectively hack into the emotion-creativity link. The first approach enables a system to hack into the function of motor expressions in emotion regulation, in order to regulate the emotions that occur spontaneously during a creative task. We demonstrate that embodied interactions designed on the basis of motor expressions, while used to interact with a system, can influence an intended emotion and thereby influence the relationship between emotion and creativity. The second approach enables a system to hack into the cognitive appraisal processes that help cause emotion during a creative task. We demonstrate that believable computer-generated feedback about the originality of a user's own ideas can be manipulated to help cause an intended emotion, determine its intensity, and thereby also influence the relationship between emotion and creativity. The contribution of this thesis is the development of two novel approaches to interactive systems that aim to influence the emotion-creativity link and, in particular, the explication of the mechanisms underlying these approaches. The studies form a novel contribution to both interactive systems research and the creativity sciences.
818

The application of multiple modalities to improve home care and reminder systems

Warnock, David January 2014 (has links)
Existing home care technology tends to consist of pre-programmed systems limited to one or two interaction modalities. This can make such systems inaccessible to people with sensory impairments and unable to cope with a dynamic and heterogeneous environment such as the home. This thesis presents research that considers how home care technology can be improved by employing multiple visual, aural, tactile and even olfactory interaction methods. A wide range of modalities were tested to gain a better insight into their properties and merits. That information was used to design and construct Dyna-Cue, a prototype multimodal reminder system. Dyna-Cue was designed to use multiple modalities and to switch between them in real time to maintain higher levels of effectiveness and acceptability. The Dyna-Cue prototype was evaluated against other models of reminder delivery and was shown to be an effective and appropriate tool that can help people to manage their time and activities.
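Real-time switching between modalities, of the kind Dyna-Cue performs, can be sketched as repeatedly picking the modality whose suitability score best fits the current context; the modalities, contexts and scores below are illustrative assumptions, not Dyna-Cue's actual design:

```python
def choose_modality(context):
    """Pick the reminder modality that best fits the current context (illustrative scores)."""
    scores = {
        "speech":    0.0 if context["noisy"] else 0.8,
        "visual":    0.2 if context["user_away_from_screen"] else 0.9,
        "tactile":   0.7,   # a wearable vibration works in most contexts
        "olfactory": 0.3,
    }
    return max(scores, key=scores.get)

# The system re-evaluates the choice as the home environment changes.
print(choose_modality({"noisy": True,  "user_away_from_screen": True}))   # tactile
print(choose_modality({"noisy": False, "user_away_from_screen": False}))  # visual
```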
819

Profiling a parallel domain specific language using off-the-shelf tools

Al-Saeed, Majed Mohammed Abdullah January 2015 (has links)
Profiling tools are essential for understanding and tuning the performance of both parallel programs and parallel language implementations. Assessing the performance of a program in a language with high-level parallel coordination is often complicated by the layers of abstraction present in the language and its implementation. This thesis investigates whether it is possible to profile parallel Domain Specific Languages (DSLs) using existing host language profiling tools. The key challenge is that the host language tools report the performance of the DSL runtime system (RTS) executing the application rather than the performance of the DSL application. The key questions are whether a correct, effective and efficient profiler can be constructed using host language profiling tools; whether it is possible to effectively profile the DSL implementation; and what capabilities are required of the host language profiling tools. The main contribution of this thesis is the development of an execution profiler for the parallel DSL Haskell Distributed Parallel Haskell (HdpH) using the host language profiling tools. We show that it is possible to construct a profiler (HdpHProf) to support performance analysis of both DSL applications and the DSL implementation. The implementation uses several new GHC features, including the GHC-Events library and ThreadScope, and develops two new performance analysis tools for HdpH internals: Spark Pool Contention Analysis and Registry Contention Analysis. We present a critical comparative evaluation of the host language profiling tools that we used (GHC-PPS and ThreadScope) with another recent functional profiler, EdenTV, alongside four important imperative profilers. This is the first report on the performance of functional profilers in comparison with well-established, industrial-standard imperative profiling technologies. We systematically compare the profilers for usability and data presentation. We found that GHC-PPS performs well in terms of overheads and usability, so using it to profile the DSL is feasible and would not have a significant impact on DSL performance. We validate HdpHProf for functional correctness and measure its performance using six benchmarks. HdpHProf works correctly and can scale to profile HdpH programs running on up to 192 cores of a 32-node Beowulf cluster. We characterise the performance of HdpHProf in terms of profiling data size and profiling execution runtime overhead. The results show that HdpHProf does not alter the behaviour of GHC-PPS and retains low tracing overheads, close to the studied functional profilers: 18% on average. They also show a low ratio of HdpH trace events in the GHC-PPS eventlog, less than 3% on average. We show that HdpHProf is effective and efficient to use for performance analysis and tuning of DSL applications. We use HdpHProf to identify performance issues and to tune the thread granularity of six HdpH benchmarks with different parallel paradigms, e.g. divide and conquer, flat data parallel, and nested data parallel. This includes identifying problems such as thread granularity that is too small or too large, a problem size too small for the parallel architecture, and synchronisation bottlenecks. We show that HdpHProf is effective and efficient for tuning the parallel DSL implementation. We use the Spark Pool Contention Analysis tool to examine how the spark pool implementation performs when accessed concurrently. We found that appropriate thread granularity can significantly reduce both conflict ratios and conflict durations, by more than 90%. We use the Registry Contention Analysis tool to evaluate three alternative registry implementations. We found that the tools can give a better understanding of how different implementations of the HdpH RTS perform.
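As a schematic illustration of the kind of statistic a contention-analysis tool reports, the sketch below computes a conflict ratio and total conflict duration from a list of (start, end) access intervals; the interval representation is a simplifying assumption and is not the GHC-Events eventlog format:

```python
def contention_stats(accesses):
    """Compute conflict ratio and total conflict duration from (start, end) access intervals."""
    accesses = sorted(accesses)
    conflicts, conflict_time = 0, 0.0
    for (s1, e1), (s2, e2) in zip(accesses, accesses[1:]):
        overlap = min(e1, e2) - s2
        if overlap > 0:          # next access started before the previous one finished
            conflicts += 1
            conflict_time += overlap
    return conflicts / max(len(accesses) - 1, 1), conflict_time

# Hypothetical spark-pool access intervals (milliseconds) from a decoded trace.
print(contention_stats([(0.0, 1.2), (0.5, 1.8), (2.0, 2.4), (2.3, 3.0)]))
```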
820

A complete reified temporal logic and its applications

Zhao, Guoxing January 2008 (has links)
Temporal representation and reasoning plays a fundamental and increasingly important role in some areas of Computer Science and Artificial Intelligence. A natural approach to representing and reasoning about time-dependent knowledge is to associate it with instantaneous time points and/or durative time intervals. In particular, there are various ways to use logic formalisms for temporal knowledge representation and reasoning. Based on the chosen logic frameworks, temporal theories can be classified into modal logic approaches (including propositional modal logic approaches and hybrid logic approaches) and predicate logic approaches (including temporal argument methods and temporal reification methods). Generally speaking, the predicate logic approaches are more expressive than the modal logic approaches and, among the predicate logic approaches, temporal reification methods are even more expressive for representing and reasoning about general temporal knowledge. However, the current reified temporal logics are so complicated that each of them either does not have a clear definition of its syntax and semantics or does not have a sound and complete axiomatisation. In this thesis, a new complete reified temporal logic (CRTL) is introduced which has a clear syntax, a clear semantics, and a complete axiomatic system inherited from the initial first-order language. This is the main improvement made to the reification approaches for temporal representation and reasoning. It is a true reified logic, since some meta-predicates are formally defined that allow one to predicate and quantify over propositional terms, and it therefore provides the expressive power to represent and reason about both temporal and non-temporal relationships between propositional terms. For a special case, the temporal model of the simplified CRTL system (SCRTL) is defined as scenarios and graphically represented as a directed, partially weighted or attributed, simple graph. The problem of matching temporal scenarios is thereby transformed into conventional graph matching. For the scenario graph matching problem, the traditional eigen-decomposition graph matching algorithm and the symmetric polynomial transform graph matching algorithm are critically examined and improved into two new algorithms, named the meta-basis graph matching algorithm and the sort-based graph matching algorithm respectively, where the meta-basis graph matching algorithm works better for 0-1 matrices while the sort-based graph matching algorithm is more suitable for continuous real matrices. Another important contribution is the node similarity graph matching framework proposed in this thesis, on the basis of which node similarity graph matching algorithms can be defined, analysed and extended uniformly. We prove that all these node similarity graph matching algorithms fail to work for matching circles.
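For reference, a compact sketch of classical eigen-decomposition (Umeyama-style) graph matching, the starting point that the thesis examines and improves upon; this is the textbook method for symmetric adjacency matrices, not the meta-basis or sort-based algorithms developed in the thesis:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def spectral_match(A, B):
    """Match nodes of two equal-sized graphs via eigenvectors of their adjacency matrices."""
    _, Ua = np.linalg.eigh(A)                 # eigh assumes symmetric (weighted) adjacency
    _, Ub = np.linalg.eigh(B)
    similarity = np.abs(Ua) @ np.abs(Ub).T    # node-to-node similarity scores
    rows, cols = linear_sum_assignment(-similarity)  # maximise total similarity
    return dict(zip(rows.tolist(), cols.tolist()))   # node of A -> node of B

# A small graph and a relabelled copy of it (nodes 0 and 2 swapped).
A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
P = np.eye(3)[[2, 1, 0]]
B = P @ A @ P.T
print(spectral_match(A, B))  # prints a node correspondence between A and B
```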
