  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world.
261

A framework for interactive end-user web automation

Eliwa, Essam January 2013 (has links)
This research investigates the feasibility and usefulness of a Web-based model for end-user Web automation. The aim is to empower end users to automate their Web interactions. Web automation is defined here as the study of theoretical and practical techniques for applying an end-user programming model to enable the automation of Web tasks, activities, or interactions. To date, few tools address the issue of Web automation; moreover, their functionality and usage are limited. A novel model is presented, which combines end-user programming techniques and the software tools philosophy with the vision of the “Web as a platform.” The model provides a Web-based environment that enables the rapid creation of efficient and useful Web-oriented automation tools. It consists of a command line for the Web, a shell scripting language, and a repository of Web commands. A framework called Web2Sh (Web 2.0 Shell) has been implemented, which includes the design and implementation of a scripting language (WSh) enabling end users to create and customise Web commands. A number of core Web2Sh Web commands were implemented. There are two techniques for extending the system: developers can implement new core Web commands, and end users can use WSh to connect, customise, and parameterise existing Web commands to create new ones. The feasibility and usefulness of the proposed model have been demonstrated by implementing several automation scripts using Web2Sh, and by a case-study-based experiment carried out by volunteer participants. The implemented Web2Sh framework provides a novel and realistic environment for creating, customising, and running Web-oriented automation tools.
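The abstract does not reproduce WSh syntax, but the "software tools" idea it borrows — small Web commands connected into pipelines, as Unix shells compose text filters — can be sketched. The following Python sketch is purely illustrative: the `WebCommand` class, the pipe operator, and the command names are hypothetical stand-ins, not the actual Web2Sh/WSh API.

```python
# Hypothetical sketch of the "software tools" idea behind Web2Sh:
# small Web commands composed into pipelines. Names and API are
# illustrative only, not the real WSh syntax.

class WebCommand:
    """A named transformation in a pipeline of Web data."""
    def __init__(self, name, fn):
        self.name, self.fn = name, fn

    def __or__(self, other):
        # 'cmd1 | cmd2' pipes the output of cmd1 into cmd2.
        return WebCommand(f"{self.name} | {other.name}",
                          lambda data: other.fn(self.fn(data)))

    def run(self, data):
        return self.fn(data)

# Illustrative commands over a list of URLs / (url, page) pairs.
fetch  = WebCommand("fetch", lambda urls: [(u, f"<html>{u}</html>") for u in urls])
grep   = lambda pat: WebCommand(f"grep {pat}",
                                lambda pages: [p for p in pages if pat in p[1]])
titles = WebCommand("titles", lambda pages: [p[0] for p in pages])

pipeline = fetch | grep("thesis") | titles
print(pipeline.run(["example.org/thesis", "example.org/blog"]))
# → ['example.org/thesis']
```

The pipe-composition design mirrors the thesis's claim that end users extend the system by connecting and parameterising existing commands rather than writing new core code.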
262

Turn it this way : remote gesturing in video-mediated communication

Kirk, David Stanley January 2007 (has links)
Collaborative physical tasks are working tasks characterised by workers 'in-the-field' who manipulate task artefacts under the guidance of a remote expert. Examples of such interactions include paramedics requiring field-surgery consults from hospital surgeons, soldiers requiring support from distant bomb-disposal experts, technicians inspecting and repairing machinery under the guidance of a chief engineer, or scientists examining artefacts with distributed colleagues. This thesis considers the design of technology to support such forms of distributed working. Early research in video-mediated communication (VMC) which sought to support such interactions presumed that video links between remote spaces would improve collaboration. The results of these studies, however, demonstrated that in such tasks audio-video links alone were unlikely to improve performance beyond that achievable by simpler audio-only links. In explanation of these observations, a reading of studies of situated collaborative working practices suggests that to support distributed object-focussed interactions it is beneficial not only to provide visual access to remote spaces but also to present within the task-space the gestural actions of remote collaborators. Remote Gestural Simulacra are advanced video-mediated communication tools that enable remote collaborators to both see and observably point at and gesture around and towards shared task artefacts located at another site. Technologies developed to support such activities have been critiqued; their design often fractures the interaction between the collaborating parties, restricting access to aspects of communication which are commonly used in co-present situations to coordinate interaction and ground understanding. This thesis specifically explores the design of remote gesture tools, seeking to understand how remote representations of gesture can be used during collaborative physical tasks.
In a series of lab-based studies, the utility of remote gesturing is investigated both qualitatively, examining its collaborative function, and quantitatively, exploring its impact on facets of task performance and on collaborative language. The thesis also discusses how the configuration of remote gesture tools impacts on their usability, empirically comparing various gesture tool designs. The thesis constructs and examines an argument that remote gesture tools should be designed from a 'mixed ecologies' perspective (theoretically alleviating the problems engendered by 'fractured ecologies'), in which collaborating partners are given access to the most salient and relevant features of communicative action utilised in face-to-face interaction, namely mutual and reciprocal awareness of commonly understood object-focussed actions (hand-based gestures) and mutual and reciprocal awareness of task-space perspectives. The thesis demonstrates experimental support for this position and concludes by discussing how the findings generated from the thesis research can be used to guide the design of future iterations of remote gesture tools, and by presenting directions for further research.
263

Visual demand evaluation methods for in-vehicle interfaces

Pettitt, Michael Andrew January 2008 (has links)
Advancements in computing technology have been keenly felt in the automotive industry. Novel in-car systems have the potential to substantially improve the safety, efficiency and comfort of the driving experience. However, they must be carefully designed, so their use does not dangerously distract drivers from fundamental, safety-critical driving tasks. Distraction is a well-established causal factor in road accidents. A concern is that the introduction of new in-vehicle technology may increase exposure to distraction, and lead to an increase in distraction-related accidents. The range of systems often termed In-Vehicle Information Systems (IVIS), encompassing navigation and entertainment systems, in-car email and Internet, are the focus of this thesis, since they are commonly associated with long tasks that are not considered fundamentally relevant to driving. A variety of Human-Computer Interaction (HCI) and Human Factors methods have been employed to assess the potential distraction of IVIS task engagement. These include on-road evaluations, driving simulator studies, and surrogate methods, such as peripheral detection tasks and static task time assessments. The occlusion technique is one such surrogate, where task performance is assessed under intermittent vision conditions. Participants complete a task with 1.5-second vision periods, followed by a period where their vision is occluded. In this way, the technique evaluates how visually demanding a task is, mimicking the behaviour of glancing to and from the forward road scene when driving and performing IVIS tasks. An evaluation of the technique's validity is presented. Sixteen participants performed two tasks on two systems under three conditions: static (full vision), static (occlusion), and whilst driving. Results confirmed other research, concluding that the technique is valid. However, the method's assessment through user trials based on measures of human performance is problematic.
Such trials require robust, reliable prototype systems, and can therefore only take place in later design stages. Consequently, the economic effectiveness of the technique is questionable. The keystroke-level model (KLM), which predicts task times for error-free performance by expert users in routine tasks, provides an alternative quantitative assessment method to user trials. Tasks are decomposed into their most primitive actions, termed operators, which are associated with empirically assessed time values. These values are then summed to predict performance times. An evaluation of the technique in a vehicle environment is presented; twelve participants performed eleven tasks on two in-car entertainment systems, and task times were compared with KLM predictions. Results demonstrate the technique remains valid beyond its original desktop-computing context. However, the traditional KLM predicts static task time only, and an extended procedure is required to consider occluded task performance. Two studies are presented, seeking to extend the KLM in order to model task performance under the interrupted vision conditions of occlusion trials. In the first, predictions of occlusion metrics are compared with results from the earlier occlusion assessment. In the second, twelve participants performed three tasks on two IVIS systems under occlusion conditions. Results were subsequently compared with predicted values. Both studies conclude that the extended KLM approach produces valid predictions of occlusion metrics, with error rates generally within 20% of observed values, in line with expectations for KLM predictions. Subsequently, a case study is presented, to demonstrate the technique's reliability. The results of an independent occlusion study of two IVIS tasks are compared with predictions made by an HCI expert trained in the application of the extended KLM.
Error rates for this study were equally low, leading to the conclusion that the extended KLM appears reliable, though further studies are required. It is concluded that the extended-KLM technique is a valid, reliable and economical method for assessing the visual demand of IVIS tasks. In contrast to many user-trial methods, the technique can be applied in early design stages. In addition, future work areas are identified, which could serve to further enhance the validity, reliability and economy of the technique. These include automating the extended KLM procedure with a software tool, and developing new cognitive and physical operators, and new assumptions, specific to IVIS and/or occlusion conditions. For example, it will be useful to develop new cognitive operators that consider the time taken to visually reorient to complex displays following occluded periods.
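The basic KLM procedure the abstract describes — decompose a task into primitive operators and sum empirically derived times — can be shown in a minimal sketch. The operator values below are the classic desktop estimates from the KLM literature, used only as placeholders; the thesis's extended, occlusion-specific operators and assumptions are not reproduced here.

```python
# Minimal keystroke-level model (KLM) sketch: a task is decomposed into
# primitive operators, each with an empirically derived time, and the
# predicted error-free expert task time is their sum. Values are the
# standard desktop estimates, not the thesis's IVIS-specific operators.

OPERATOR_TIMES = {
    "K": 0.28,  # keystroke / button press (average skilled user)
    "P": 1.10,  # point to a target with a device
    "H": 0.40,  # home hands between devices
    "M": 1.35,  # mental preparation
}

def klm_predict(operators):
    """Predict task time (seconds) as the sum of operator times."""
    return sum(OPERATOR_TIMES[op] for op in operators)

# e.g. mentally prepare, point at a control, press it, then two keys:
task = ["M", "P", "K", "K", "K"]
print(f"predicted task time: {klm_predict(task):.2f} s")
```

An occlusion-aware extension of the kind the thesis develops would then partition this predicted time into vision and occluded intervals; that step depends on the thesis's own operator set and is not sketched here.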
264

The data integrity problem and multi-layered document integrity

Moss, Ben January 2007 (has links)
Data integrity is a fundamental aspect of computer security that has attracted much interest in recent decades. Despite a general consensus on the meaning of the problem, the lack of a formal definition has led to spurious claims such as "tamper proof", "prevent tampering", and "tamper protection", all of which are misleading. Ashman recently proposed a new approach for protecting the integrity of a document that claims the ability to detect, locate, and correct tampering. If determining integrity is only part of the problem, then a more general notion of data integrity is needed. Furthermore, in the presence of a persistent tamperer, the problem is more concerned with maintaining and proving the integrity of data, rather than determining it. This thesis introduces a formal model for the more general notion of data integrity by providing a formal problem semantics for its sub-problems: detection, location, correction, and prevention. The model is used to reason about the structure of the data integrity problem and to prove some fundamental results concerning the security and existence of schemes that attempt to solve these sub-problems. Ashman's original multi-layered document integrity (MLDI) paper [1] is critically evaluated, and several issues are highlighted. These issues are investigated in detail, and a series of algorithms are developed to present the MLDI schemes. Several factors that determine the feasibility of Ashman's approach are identified in order to prove certain theoretical results concerning the efficacy of MLDI schemes.
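To make the sub-problem distinction concrete, here is a minimal Python sketch of detection versus location using per-block hashes. This is not Ashman's MLDI scheme (which is layered and also supports correction); it only illustrates that a whole-document digest can detect tampering, while finer-grained hashes can also locate it.

```python
# Illustrative sketch, NOT the MLDI scheme: per-block SHA-256 hashes
# let a verifier not only detect that a document changed but also
# locate which block changed. Correction and prevention, the other
# sub-problems formalised in the thesis, need more machinery.

import hashlib

def block_hashes(doc, block_size=16):
    blocks = [doc[i:i + block_size] for i in range(0, len(doc), block_size)]
    return [hashlib.sha256(b.encode()).hexdigest() for b in blocks]

def locate_tampering(doc, reference_hashes, block_size=16):
    """Return indices of blocks whose hash no longer matches."""
    current = block_hashes(doc, block_size)
    return [i for i, (a, b) in enumerate(zip(reference_hashes, current))
            if a != b]

original = "The quick brown fox jumps over the lazy dog."
ref = block_hashes(original)
tampered = original.replace("lazy", "hazy")
print(locate_tampering(tampered, ref))  # → [2], the block holding the edit
```

Note the sketch assumes the reference hashes themselves are held somewhere trustworthy; as the thesis observes, a persistent tamperer shifts the problem to maintaining and proving integrity, not merely determining it.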
265

Practical mobile ad hoc networks for large scale cattle monitoring

Wietrzyk, Bartosz January 2008 (has links)
This thesis is concerned with the identification of realistic requirements for a cattle monitoring system and the design of a practical architecture addressing those requirements. Automated monitoring of cattle with wireless monitoring devices mounted on the animals can increase the efficiency of cattle production, decrease its reliance on human labour and thus increase its profitability. Multi-hop ad hoc wireless communication has the potential to increase the battery life of the animal-mounted devices, decrease their size and combat disconnections. This thesis reveals that no current approach sufficiently addresses the energy constraints of the animal-mounted devices and potential disconnections. Based on requirements identified during field experiments we conducted, we propose a delay-tolerant store-and-forward architecture that provides data retention, detects custom events, issues notifications, and answers remote and in-situ queries. This architecture utilizes fixed infrastructure but also works in ad hoc, infrastructureless conditions. The core of the proposed architecture, Mobile Ad Hoc Network (MANET) communication, offloads data for long-term storage by sending it to farm servers via sinks that are part of the MANET, and handles in-situ queries issued by users collocated with the animals. The proposed MANET routing algorithm addresses high node mobility and disconnections. It provides lower and more balanced energy usage, shorter delays and a higher success ratio for delivering answers to in-situ queries than more generic existing approaches. Problems of large-scale deployment of the envisaged system are also addressed. We discuss the necessary configuration process performed during system installation, as well as pervasive mobile and home access to the target system.
We propose cost-efficient strategies for sink installation and for connecting sinks to farm servers, adaptive to different requirements, estate layouts, available infrastructure and existing human and vehicle mobility. We also propose a cost-efficient security model for the target system based on public-key cryptography.
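The delay-tolerant store-and-forward idea described above can be sketched minimally: an animal-mounted node buffers readings while disconnected and offloads them when a sink comes into radio range. The class and method names below are illustrative, not the thesis's actual protocol or data model.

```python
# Hypothetical store-and-forward sketch: readings accumulate on the
# animal-mounted node and are forwarded, oldest first, at the next
# contact opportunity with a sink connected to the farm servers.

from collections import deque

class AnimalNode:
    def __init__(self, node_id):
        self.node_id = node_id
        self.buffer = deque()          # readings awaiting delivery

    def record(self, reading):
        self.buffer.append(reading)    # store while disconnected

    def meet_sink(self, sink):
        # Contact opportunity: drain the buffer into the sink.
        while self.buffer:
            sink.receive(self.node_id, self.buffer.popleft())

class Sink:
    def __init__(self):
        self.store = []                # long-term storage at farm server

    def receive(self, node_id, reading):
        self.store.append((node_id, reading))

cow = AnimalNode("cow-17")
for temp in (38.5, 38.6, 39.1):
    cow.record(temp)

sink = Sink()
cow.meet_sink(sink)
print(sink.store)                      # all buffered readings delivered
```

In the thesis's architecture the forwarding is multi-hop (readings may travel via other animals before reaching a sink), which is what motivates the energy-balanced routing the abstract mentions; this single-hop sketch omits that.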
266

Modelling tools and methodologies for rapid protocell prototyping

Smaldon, James January 2011 (has links)
The field of unconventional computing considers the possibility of implementing computational devices using novel paradigms and materials to produce computers which may be more efficient, adaptable and robust than their silicon-based counterparts. The integration of computation into the realms of chemistry and biology will allow the embedding of engineered logic into living systems and could produce truly ubiquitous computing devices. Recently, advances in synthetic biology have resulted in the modification of microorganism genomes to create computational behaviour in living cells, so-called “cellular computing”. The cellular computing paradigm offers the possibility of intelligent bacterial agents which may respond and communicate with one another according to chemical signals received from the environment. However, the high level of complexity involved in altering an organism which has been well adapted to certain environments over millions of years of evolution suggests an alternative approach, in which chemical computational devices are constructed completely from the bottom up, allowing the designer exquisite control and knowledge about the system being created. This thesis presents the development of a simulation and modelling framework to aid the study and design of bottom-up chemical computers, involving the encapsulation of computational reactions within vesicles. The new “vesicle computing” paradigm is investigated using a sophisticated multi-scale simulation framework, developed from mesoscale, macroscale and executable biology techniques.
267

Real-time guarantees in high-level agent programming languages

Vikhorev, Konstantin January 2011 (has links)
In this thesis we present a new approach to providing soft real-time guarantees for Belief-Desire-Intention (BDI) agents. We analyse real-time guarantees for BDI agents and show how these can be achieved within a generic BDI programming framework. As an illustration of our approach, we develop a new agent architecture, called AgentSpeak(RT), and its associated programming language, which allows the development of real-time BDI agents. AgentSpeak(RT) extends AgentSpeak(L) [28] intentions with deadlines, which specify the time by which the agent should respond to an event, and priorities, which specify the relative importance of responding to a particular event. The AgentSpeak(RT) interpreter commits to a priority-maximal set of intentions: a set of intentions that is maximally feasible while preferring higher-priority intentions. Real-time tasks can be freely mixed with tasks for which no deadline and/or priority has been specified, and if no deadlines and priorities are specified, the behaviour of the agent defaults to that of a non-real-time BDI agent. We perform a detailed case study of the use of AgentSpeak(RT) to demonstrate its advantages. This case study involves the development of an intelligent control system for a simple model of a nuclear power plant. We also prove some properties of the AgentSpeak(RT) architecture, such as guaranteed reactivity delay of the AgentSpeak(RT) interpreter and probabilistic guarantees of successful execution of intentions by their deadlines. We extend the AgentSpeak(RT) architecture to allow the parallel execution of intentions. We present a multitasking approach to the parallel execution of intentions in the AgentSpeak(RT) architecture. We demonstrate the advantages of parallel execution of intentions in AgentSpeak(RT) by showing how it improves the behaviour of the intelligent control system for the nuclear power plant. We prove real-time guarantees of the extended AgentSpeak(RT) architecture.
We present a characterisation of real-time task environments for an agent, and describe how it relates to AgentSpeak(RT) execution time profiles for a plan and an action. We also show a relationship between the estimated execution time of a plan in a particular environment and the syntactic complexity of an agent program.
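The notion of a priority-maximal set of intentions can be sketched as a greedy selection: consider intentions in decreasing priority order and admit each one only if the resulting set remains feasible. The tuple layout and the feasibility test below (a plain earliest-deadline-first check over estimated execution times) are illustrative simplifications, not AgentSpeak(RT)'s actual interpreter.

```python
# Sketch of a priority-maximal intention set: maximally feasible while
# preferring higher-priority intentions. Intentions are hypothetical
# (priority, estimated execution time, deadline) tuples.

def feasible(intentions):
    """EDF check: execute in deadline order; every deadline must be met."""
    t = 0.0
    for _, exec_time, deadline in sorted(intentions, key=lambda i: i[2]):
        t += exec_time
        if t > deadline:
            return False
    return True

def priority_maximal(intentions):
    """Greedily admit intentions by priority while the set stays feasible."""
    chosen = []
    for intent in sorted(intentions, key=lambda i: -i[0]):
        if feasible(chosen + [intent]):
            chosen.append(intent)
    return chosen

intentions = [
    (3, 2.0, 2.0),   # high priority, tight deadline
    (2, 3.0, 6.0),
    (1, 2.0, 4.0),   # no longer schedulable once the two above are in
]
print(priority_maximal(intentions))
```

Here the low-priority intention is dropped because admitting it would make the higher-priority set miss a deadline, matching the informal definition above; the thesis's probabilistic guarantees additionally account for uncertainty in execution times, which this sketch treats as exact.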
268

Documents as functions

Lumley, John William January 2012 (has links)
Treating variable data documents as functions over their data bindings opens opportunities for building more powerful, robust and flexible document architectures to meet the needs arising from the confluence of developments in document engineering, digital printing technologies and marketing analysis. This thesis describes a combination of several XML-based technologies, both to represent and to process variable documents and their data, leading to extensible, high-quality and 'higher-order' document generation solutions. The architecture (DDF) uses XML uniformly throughout the documents and their processing tools, with different semantic spaces interspersed through namespacing. An XML-based functional programming language (XSLT) is used to describe all intra-document variability and to implement most of the tools. Document layout intent is declared within a document as a hierarchical set of combinators attached to a tree-based graphical presentation. Evaluating a document bound to an instance of data involves using a compiler to create an executable from the document, running this with the data instance as argument to create a new document with its layout intent described, and then resolving that layout with an extensible layout processor. The use of these technologies, with design paradigms and coding protocols, makes it possible to construct documents that not only have high flexibility and quality, but also behave in higher-order ways. A document can be partially bound to data and evaluated, modifying its presentation while remaining variably responsive to future data. Layout intent can be re-satisfied as presentation trees are modified by programmatic sections embedded within them. The key enablers are described and illustrated through examples.
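The "documents as functions" idea, and in particular partial binding, can be sketched with ordinary closures: binding some fields of a variable document yields a new document-function that stays responsive to the remaining data. The template notation and field names below are hypothetical Python stand-ins, not DDF's XML/XSLT representation.

```python
# Sketch of partial binding: a variable document is a function over its
# data bindings; supplying only some bindings returns a new document
# still variable in the rest. Template syntax here is plain Python
# formatting, purely for illustration.

import string

def template_fields(template):
    """Names of the unbound fields in a template."""
    return [f for _, f, _, _ in string.Formatter().parse(template) if f]

def document(template):
    """A variable document: bindings -> text, or a new document-function."""
    def bind(**bindings):
        missing = [f for f in template_fields(template) if f not in bindings]
        if missing:
            # Partial evaluation: fold known bindings in, keep the rest.
            partial = template
            for k, v in bindings.items():
                partial = partial.replace("{" + k + "}", str(v))
            return document(partial)
        return template.format(**bindings)
    return bind

letter = document("Dear {name}, your order {order} ships on {date}.")
todays_batch = letter(date="2024-05-01")   # partially bound document
print(todays_batch(name="Ada", order="42"))
# → Dear Ada, your order 42 ships on 2024-05-01.
```

This mirrors the higher-order behaviour claimed above: `todays_batch` is itself a document, already specialised to one date yet still variable over the per-recipient data.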
269

Towards a formally verified functional quantum programming language

Green, Alexander S. January 2010 (has links)
This thesis looks at the development of a framework for a functional quantum programming language. The framework is first developed in Haskell, looking at how a monadic structure can be used to explicitly deal with the side effects inherent in the measurement of quantum systems, and goes on to look at how a dependently typed reimplementation in Agda gives us the basis for a formally verified quantum programming language. The two implementations are not in themselves fully developed quantum programming languages, as they are embedded in their respective parent languages, but they are a major step towards a full, formally verified, functional quantum programming language. Dubbed the “Quantum IO Monad”, this framework is designed following a structural approach as given by a categorical model of quantum computation.
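Why measurement must be treated as a side effect can be seen in a toy one-qubit simulator: measuring collapses the state, so the order of operations matters and pure functional code cannot model it directly. This plain, stateful Python sketch is only an analogy; the Quantum IO Monad itself is a Haskell/Agda structure that sequences exactly this kind of effect monadically.

```python
# Toy one-qubit simulator (real amplitudes only) illustrating the
# measurement side effect. Not the Quantum IO Monad, just the effect
# that its monadic structure exists to sequence.

import math, random

class Qubit:
    def __init__(self):
        self.amp0, self.amp1 = 1.0, 0.0      # start in |0>

    def hadamard(self):
        a, b = self.amp0, self.amp1
        s = 1 / math.sqrt(2)
        self.amp0, self.amp1 = s * (a + b), s * (a - b)

    def measure(self, rng=random):
        # Side effect: the state collapses to the observed outcome.
        p1 = self.amp1 ** 2
        outcome = 1 if rng.random() < p1 else 0
        self.amp0, self.amp1 = (0.0, 1.0) if outcome else (1.0, 0.0)
        return outcome

q = Qubit()
q.hadamard()                 # |0> -> (|0> + |1>)/sqrt(2)
bit = q.measure()            # 0 or 1, each with probability 1/2
print(bit, (q.amp0, q.amp1)) # state has collapsed to the outcome
```

Measuring the same qubit a second time deterministically returns the first outcome, which is the behaviour a pure function of the pre-measurement state could not express without threading the effect explicitly, as a monad does.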
270

The structured development of virtual environments : enhancing functionality and interactivity

Eastgate, Richard Mark January 2001 (has links)
Desktop Virtual Reality (VR) is an easy and affordable way to implement VR technology within an organisation. It provides an experience that can be shared by many people, and its 3D, interactive capability facilitates the communication of ideas not possible using other media formats. There are a number of software toolkits available for the building and programming of Virtual Environments (VEs), but very few resources that can help developers acquire the skills and techniques required to give their VEs utility and usability. This thesis reviews existing research into VE design with an emphasis on interactivity and usability, and then uses a case-study-based approach to conceptualise the VE development process and develop exemplar guidance tools. The first group of case studies dates from the early 1990s, with an emphasis on finding ways to build VEs incorporating functionality. The experience gained through these case studies was used to discover the issues most relevant to the VE developer and to report on the techniques used to resolve them. Several models are then presented to explain these techniques and relate them to the VE development context. For the second set of case studies, the emphasis moves to finding ways of making VEs more usable. Several approaches are presented, and further conceptualisation results in a decision-table-based guidance tool. The third set of case studies was carried out within the framework provided by the Virtual Environment Development Structure (VEDS), developed jointly by the author and other members of the Virtual Reality Applications Research Team (VIRART) at the University of Nottingham. In the light of this practical application of the framework and the experience gained throughout the case studies, changes are made to the structure to make it more accurately represent the actual process employed by VE developers. This version of VEDS is then used to more effectively define the areas where VE development guidance tools are needed.
Using this information, and based on the experience acquired and the techniques developed throughout this research, three exemplar tools are presented.
