About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
71

A genetic programming hyper-heuristic approach to automated packing

Hyde, Matthew January 2010
This thesis presents a programme of research which investigated a genetic programming hyper-heuristic methodology to automate the heuristic design process for one-, two- and three-dimensional packing problems. Traditionally, heuristic search methodologies operate on a space of potential solutions to a problem. In contrast, a hyper-heuristic is a heuristic which searches a space of heuristics, rather than a solution space directly. The majority of hyper-heuristic research papers so far have involved selecting a heuristic, or sequence of heuristics, from a set pre-defined by the practitioner. Less well studied are hyper-heuristics which can create new heuristics from a set of potential components. This thesis presents a genetic programming hyper-heuristic which makes it possible to automatically generate heuristics for a wide variety of packing problems. The genetic programming algorithm creates heuristics by intelligently combining components. The evolved heuristics are shown to be highly competitive with human-created heuristics. The methodology is first applied to one-dimensional bin packing, where the evolved heuristics are analysed to determine their quality, specialisation, robustness, and scalability. Importantly, it is shown that these heuristics can be reused on unseen problems. The methodology is then applied to the two-dimensional packing problem to determine if automatic heuristic generation is possible for this domain. The three-dimensional bin packing and knapsack problems are then addressed. It is shown that the genetic programming hyper-heuristic methodology can evolve human-competitive heuristics for the one-, two-, and three-dimensional cases of both of these problems. No change of parameters or code is required between runs. This represents the first packing algorithm in the literature able to claim human-competitive results in such a wide variety of packing domains.
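To give a concrete flavour of the approach, the sketch below shows a scoring-based first-fit framework of the kind such evolved heuristics typically slot into: the genetic programming run evolves the scoring expression, while the packing loop stays fixed. This is a minimal illustration, not the thesis's actual component grammar, and the particular scoring expression shown is hypothetical.

    import random

    CAPACITY = 150

    def evolved_score(free, size):
        # A hypothetical example of the kind of arithmetic expression a GP
        # run might evolve; GP builds such expressions automatically from
        # primitive components (+, -, *, abs, ...).
        return free - size - abs(free - 2 * size)

    def pack(pieces, score=evolved_score):
        # Fixed framework: place each piece into the feasible bin whose
        # (free space, piece size) pair the evolved heuristic scores highest.
        bins = []  # remaining capacity of each open bin
        for size in pieces:
            feasible = [i for i, free in enumerate(bins) if free >= size]
            if feasible:
                best = max(feasible, key=lambda i: score(bins[i], size))
                bins[best] -= size
            else:
                bins.append(CAPACITY - size)  # open a new bin
        return len(bins)

    pieces = [random.randint(20, 100) for _ in range(500)]
    print(pack(pieces), "bins used")

Fitness during evolution would then be measured by, for example, the number of bins a candidate scoring expression uses across a set of training instances.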
72

Reasoning about resource-bounded multi-agent systems

Nguyen, Nguyen January 2011
The thesis presents logic-based formalisms for modelling and reasoning about resource-bounded multi-agent systems. It is well known that temporal logics such as CTL and ATL are powerful tools for reasoning about multi-agent systems; however, they offer no natural way to express and reason about properties of systems in which agents require resources to perform their actions. This thesis extends Computation Tree Logic (CTL), Coalition Logic (CL) and Alternating-time Temporal Logic (ATL) so that the extended logics can specify and reason about properties of resource-bounded multi-agent systems. The extension of CTL targets systems of resource-bounded reasoners in which the resources are specifically memory, communication and time, while the extensions of CL and ATL are generalised so that any resource-bounded multi-agent system can be modelled, specified and reasoned about. For each logic, we describe the range of resource-bounded multi-agent systems it can account for and give an axiomatisation which is proved sound and complete. We also study the satisfiability problem for these logics.
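As a hedged illustration of what such an extension looks like, published resource-bounded ATL-style logics annotate coalition modalities with a resource bound; the thesis's own grammar may differ in detail:

    \[
    \varphi ::= p \mid \lnot\varphi \mid \varphi \land \varphi
        \mid \langle\!\langle A^{b} \rangle\!\rangle \bigcirc \varphi
        \mid \langle\!\langle A^{b} \rangle\!\rangle \Box \varphi
        \mid \langle\!\langle A^{b} \rangle\!\rangle\, \varphi \,\mathcal{U}\, \varphi
    \]

Here \(\langle\!\langle A^{b} \rangle\!\rangle \varphi\) is read: coalition \(A\) has a strategy to bring about \(\varphi\) whose total cost, for each resource type, does not exceed the bound \(b\).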
73

Towards safe and efficient functional reactive programming

Sculthorpe, Neil January 2011
Functional Reactive Programming (FRP) is an approach to reactive programming where systems are structured as networks of functions operating on time-varying values (signals). FRP is based on the synchronous data-flow paradigm and supports both continuous-time and discrete-time signals (hybrid systems). What sets FRP apart from most other reactive languages is its support for systems with highly dynamic structure (dynamism) and higher-order reactive constructs (higher-order data-flow). However, the price paid for these features has been the loss of the safety and performance guarantees provided by other, less expressive, reactive languages. Statically guaranteeing safety properties of programs is an attractive proposition. This is true in particular for typical application domains for reactive programming such as embedded systems. To that end, many existing reactive languages have type systems or other static checks that guarantee domain-specific constraints, such as feedback being well-formed (causality analysis). However, compared with FRP, they are limited in their capacity to support dynamism and higher-order data-flow. On the other hand, as established static techniques do not suffice for highly structurally dynamic systems, FRP generally enforces few domain-specific constraints, leaving the FRP programmer to manually check that the constraints are respected. Thus, there is currently a trade-off between static guarantees and dynamism among reactive languages. This thesis contributes towards advancing the safety and efficiency of FRP by studying highly structurally dynamic networks of functions operating on mixed (yet distinct) continuous-time and discrete-time signals. First, an ideal denotational semantics is defined for this kind of FRP, along with a type system that captures domain-specific constraints. The correctness and practicality of the language and type system are then demonstrated by proof-of-concept implementations in Agda and Haskell. Finally, temporal properties of signals and of functions on signals are expressed using techniques from temporal logic, as motivation and justification for a range of optimisations.
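The thesis's proof-of-concept implementations are written in Agda and Haskell; purely to illustrate the central abstraction, the sketch below transliterates a Yampa-style signal function into Python: a transformer of sampled signals whose continuation may change at every step, which is what makes highly dynamic structure possible. This is a generic FRP sketch, not the thesis's own N-ary design.

    from typing import Callable, Tuple

    # A signal function maps a time delta and an input sample to an output
    # sample plus a continuation; because the continuation returned at each
    # step may be a *different* signal function, the network's structure can
    # change dynamically over time ("switching").
    SF = Callable[[float, float], Tuple[float, "SF"]]

    def integral(acc: float = 0.0) -> SF:
        # Continuous-time integration of the input signal (rectangle rule).
        def step(dt: float, x: float):
            new_acc = acc + x * dt
            return new_acc, integral(new_acc)
        return step

    def run(sf: SF, dt: float, samples):
        # Drive a signal function over a stream of equally spaced samples.
        outs = []
        for x in samples:
            y, sf = sf(dt, x)
            outs.append(y)
        return outs

    # Integrating the constant signal 1.0 for 10 steps of 0.1 s approaches 1.0.
    print(run(integral(), 0.1, [1.0] * 10))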
74

Verifying requirements for resource-bounded agents

Abdur, Rakib January 2011
This thesis presents frameworks for the modelling and verification of resource-bounded reasoning agents. The resources considered include the time, memory, and communication bandwidth required by agents to achieve a goal. The scalability and expressiveness of standard model checking techniques are investigated using two typical multi-agent reasoning problems which can be easily parameterised to increase or decrease the problem size. Both a complexity analysis and experimental results suggest that reasonably sized problem instances are unlikely to be tractable for a standard model checker without steps to reduce the branching factor of the state space. We propose two approaches to address this problem: the use of abstract specifications to model the behaviour of some of the agents in the system, and the exploitation of information about the reasoning strategy adopted by the agents. Abstract specifications are given as Linear Temporal Logic (LTL) formulae which describe the external behaviour of the agents, allowing their temporal behaviour to be compactly modelled. Conversely, reasoning strategies allow the detailed specification of the ordering of steps in the agent's reasoning process. Both approaches have been combined in TVRBA, an automated verification tool for rule-based multi-agent systems which allows the designer to specify information about agents' interaction, behaviour, and execution strategy at different levels of abstraction. The TVRBA tool generates an encoding of the system for the Maude LTL model checker, allowing properties of the system to be verified. The scalability of the new approach is illustrated using three case studies.
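As a hedged illustration of the kind of property such a tool verifies (the proposition names are hypothetical, and the concrete syntax accepted by the Maude LTL model checker differs), a bound on communication during reasoning might be written in LTL as:

    \[
    \Box \big( \mathit{started} \rightarrow \Diamond (\mathit{derived} \land \mathit{messages} \le n) \big)
    \]

read: whenever the agents start work on a goal, they eventually derive it having exchanged at most \(n\) messages.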
75

A framework for interactive end-user web automation

Eliwa, Essam January 2013
This research investigates the feasibility and usefulness of a Web-based model for end-user Web automation. The aim is to empower end users to automate their Web interactions. Web automation is defined here as the study of theoretical and practical techniques for applying an end-user programming model to enable the automation of Web tasks, activities, or interactions. To date, few tools address the issue of Web automation; moreover, their functionality and usage are limited. A novel model is presented, which combines end-user programming techniques and the software tools philosophy with the vision of the “Web as a platform.” The model provides a Web-based environment that enables the rapid creation of efficient and useful Web-oriented automation tools. It consists of a command line for the Web, a shell scripting language, and a repository of Web commands. A framework called Web2Sh (Web 2.0 Shell) has been implemented, which includes the design and implementation of a scripting language (WSh) that enables end users to create and customise Web commands. A number of Web2Sh-core Web commands were implemented. The system can be extended in two ways: developers can implement new core Web commands, and end users can use WSh to connect, customise, and parameterise existing Web commands to create new ones. The feasibility and usefulness of the proposed model have been demonstrated by implementing several automation scripts using Web2Sh, and by a case-study-based experiment carried out by volunteer participants. The implemented Web2Sh framework provides a novel and realistic environment for creating, customising, and running Web-oriented automation tools.
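WSh's concrete syntax is not reproduced here; the sketch below merely illustrates, in Python, the underlying idea of small composable "Web commands" chained Unix-pipe style. All command names are hypothetical, not part of Web2Sh.

    import re
    from urllib.request import urlopen

    # Hypothetical "Web commands": each takes a value and returns a value,
    # so they compose like the stages of a Unix pipeline.
    def fetch(url):
        with urlopen(url) as resp:
            return resp.read().decode("utf-8", errors="replace")

    def extract_links(html):
        return re.findall(r'href="(http[^"]+)"', html)

    def take(n):
        return lambda items: items[:n]

    def pipeline(*commands):
        # Chain commands left to right, like `fetch | links | take 5`.
        def run(value):
            for cmd in commands:
                value = cmd(value)
            return value
        return run

    first_links = pipeline(fetch, extract_links, take(5))
    # first_links("https://www.example.com") -> first five outbound links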
76

Turn it this way : remote gesturing in video-mediated communication

Kirk, David Stanley January 2007
Collaborative physical tasks are working tasks characterised by workers 'in-the-field' who manipulate task artefacts under the guidance of a remote expert. Examples of such interactions include paramedics requiring field-surgery consults from hospital surgeons, soldiers requiring support from distant bomb-disposal experts, technicians inspecting and repairing machinery under the guidance of a chief engineer, or scientists examining artefacts with distributed colleagues. This thesis considers the design of technology to support such forms of distributed working. Early research in video-mediated communication (VMC), which sought to support such interactions, presumed that video links between remote spaces would improve collaboration. The results of these studies, however, demonstrated that in such tasks audio-video links alone were unlikely to improve performance beyond that achievable by simpler audio-only links. In explanation of these observations, a reading of studies of situated collaborative working practices suggests that to support distributed object-focussed interactions it is beneficial not only to provide visual access to remote spaces but also to present the gestural actions of remote collaborators within the task-space. Remote Gestural Simulacra are advanced video-mediated communication tools that enable remote collaborators both to see and to observably point at and gesture around and towards shared task artefacts located at another site. Technologies developed to support such activities have been critiqued: their design often fractures the interaction between the collaborating parties, restricting access to aspects of communication which are commonly used in co-present situations to coordinate interaction and ground understanding. This thesis specifically explores the design of remote gesture tools, seeking to understand how remote representations of gesture can be used during collaborative physical tasks. In a series of lab-based studies, the utility of remote gesturing is investigated, both qualitatively, examining its collaborative function, and quantitatively, exploring its impact on facets of task performance and collaborative language. The thesis also discusses how the configuration of remote gesture tools impacts their usability, empirically comparing various gesture tool designs. The thesis constructs and examines an argument that remote gesture tools should be designed from a 'mixed ecologies' perspective (theoretically alleviating the problems engendered by 'fractured ecologies'), in which collaborating partners are given access to the most salient and relevant features of communicative action that are utilised in face-to-face interaction, namely mutual and reciprocal awareness of commonly understood object-focussed actions (hand-based gestures) and mutual and reciprocal awareness of task-space perspectives. The thesis demonstrates experimental support for this position and concludes with a discussion of how the findings generated from the thesis research can be used to guide the design of future iterations of remote gesture tools, together with directions for further research.
77

Visual demand evaluation methods for in-vehicle interfaces

Pettitt, Michael Andrew January 2008
Advancements in computing technology have been keenly felt in the automotive industry. Novel in-car systems have the potential to substantially improve the safety, efficiency and comfort of the driving experience. However, they must be carefully designed so that their use does not dangerously distract drivers from fundamental, safety-critical driving tasks. Distraction is a well-established causal factor in road accidents. A concern is that the introduction of new in-vehicle technology may increase exposure to distraction and lead to an increase in distraction-related accidents. The range of systems often termed In-Vehicle Information Systems (IVIS), encompassing navigation and entertainment systems, in-car email and Internet, is the focus of this thesis, since such systems are commonly associated with long tasks that are not considered fundamentally relevant to driving. A variety of Human-Computer Interaction (HCI) and Human Factors methods has been employed to assess the potential distraction of IVIS task engagement. These include on-road evaluations, driving simulator studies, and surrogate methods such as peripheral detection tasks and static task time assessments. The occlusion technique is one such surrogate, in which task performance is assessed under intermittent-vision conditions: participants complete a task through alternating 1.5-second vision periods and periods in which their vision is occluded. In this way, the technique evaluates how visually demanding a task is, mimicking the behaviour of glancing to and from the forward road scene while driving and performing IVIS tasks. An evaluation of the technique's validity is presented. Sixteen participants performed two tasks on two systems under three conditions: static (full vision), static (occlusion), and whilst driving. Results confirmed other research, concluding that the technique is valid. However, assessing a method through user trials based on measures of human performance is problematic: such trials require robust, reliable prototype systems, and can therefore only take place in later design stages. Consequently, the economic effectiveness of the technique is questionable. The keystroke-level model (KLM), which predicts task times for error-free performance by expert users in routine tasks, provides an alternative quantitative assessment method to user trials. Tasks are decomposed into their most primitive actions, termed operators, which are associated with empirically assessed time values. These values are then summed to predict performance times. An evaluation of the technique in a vehicle environment is presented; twelve participants performed eleven tasks on two in-car entertainment systems, and task times were compared with KLM predictions. Results demonstrate that the technique remains valid beyond its original desktop-computing context. However, the traditional KLM predicts static task time only, and an extended procedure is required to consider occluded task performance. Two studies are presented, seeking to extend the KLM in order to model task performance under the interrupted-vision conditions of occlusion trials. In the first, predictions of occlusion metrics are compared with results from the earlier occlusion assessment. In the second, twelve participants performed three tasks on two IVIS systems under occlusion conditions, and the results were compared with predicted values.
Both studies conclude that the extended KLM approach produces valid predictions of occlusion metrics, with error rates generally within 20% of observed values, in line with expectations for KLM predictions. Subsequently, a case study is presented to demonstrate the technique's reliability. The results of an independent occlusion study of two IVIS tasks are compared with predictions made by an HCI expert trained in the application of the extended KLM. Error rates for this study were equally low, leading to the conclusion that the extended KLM appears reliable, though further studies are required. It is concluded that the extended-KLM technique is a valid, reliable and economical method for assessing the visual demand of IVIS tasks. In contrast to many user-trial methods, the technique can be applied in early design stages. In addition, future work areas are identified which could further enhance the validity, reliability and economy of the technique. These include automating the extended KLM procedure with a software tool, and developing new cognitive and physical operators, and new assumptions, specific to IVIS and/or occlusion conditions. For example, it would be useful to develop new cognitive operators that consider the time taken to visually reorient to complex displays following occluded periods.
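The KLM arithmetic itself is simple enough to show concretely. Using the classic operator times of Card, Moran and Newell, a task's predicted time is the sum of the times of its operator sequence; the occlusion-aware extension would additionally apportion that total between 1.5-second vision periods and occluded intervals. A minimal sketch, with a hypothetical operator sequence:

    # Classic KLM operator times in seconds (Card, Moran & Newell).
    OPERATOR_TIMES = {
        "K": 0.28,  # keystroke or button press (average skilled typist)
        "P": 1.10,  # point at a target
        "H": 0.40,  # home the hand between devices/controls
        "M": 1.35,  # mental preparation
    }

    def klm_predict(sequence):
        # Predicted expert, error-free task time for an operator string.
        return sum(OPERATOR_TIMES[op] for op in sequence)

    # Hypothetical IVIS task: reach to the unit, think, then six presses.
    task = "HM" + "K" * 6
    print(f"predicted task time: {klm_predict(task):.2f} s")  # 3.43 s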
78

The data integrity problem and multi-layered document integrity

Moss, Ben January 2007
Data integrity is a fundamental aspect of computer security that has attracted much interest in recent decades. Despite a general consensus on the meaning of the problem, the lack of a formal definition has led to spurious claims such as "tamper proof", "prevent tampering", and "tamper protection". Ashman recently proposed a new approach to protecting the integrity of a document, which claims the ability to detect, locate, and correct tampering. If determining integrity is only part of the problem, then a more general notion of data integrity is needed. Furthermore, in the presence of a persistent tamperer, the problem becomes one of maintaining and proving the integrity of data, rather than merely determining it. This thesis introduces a formal model for this more general notion of data integrity by providing a formal problem semantics for its sub-problems: detection, location, correction, and prevention. The model is used to reason about the structure of the data integrity problem and to prove some fundamental results concerning the security and existence of schemes that attempt to solve these sub-problems. Ashman's original multi-layered document integrity (MLDI) paper [1] is critically evaluated, and several issues are highlighted. These issues are investigated in detail, and a series of algorithms are developed to present the MLDI schemes. Several factors that determine the feasibility of Ashman's approach are identified, in order to prove certain theoretical results concerning the efficacy of MLDI schemes.
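Ashman's MLDI schemes are layered and more involved than can be shown here, but the detection and location sub-problems can be illustrated with a simple per-block hashing sketch: a digest mismatch both detects tampering and localises it to a block, while correction would require additional redundancy, which is where a multi-layered scheme goes further. A minimal sketch under those stated assumptions:

    import hashlib

    BLOCK = 64  # bytes per block; smaller blocks localise tampering more finely

    def digests(data: bytes):
        # Per-block SHA-256 digests: the stored integrity metadata.
        return [hashlib.sha256(data[i:i + BLOCK]).digest()
                for i in range(0, len(data), BLOCK)]

    def locate_tampering(original_digests, received: bytes):
        # Detect and locate: indices of blocks whose digest no longer matches.
        received_digests = digests(received)
        return [i for i, d in enumerate(received_digests)
                if i >= len(original_digests) or d != original_digests[i]]

    doc = b"A" * 300
    meta = digests(doc)                      # computed while doc is trusted
    tampered = doc[:100] + b"X" + doc[101:]  # corrupt one byte in block 1
    print(locate_tampering(meta, tampered))  # -> [1]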
79

Practical mobile ad hoc networks for large scale cattle monitoring

Wietrzyk, Bartosz January 2008
This thesis is concerned with identifying realistic requirements for a cattle monitoring system and designing a practical architecture that addresses them. Automated monitoring of cattle with wireless monitoring devices mounted on the animals can increase the efficiency of cattle production, decrease its reliance on human labour and thus increase its profitability. Multi-hop ad hoc wireless communication has the potential to increase the battery life of the animal-mounted devices, decrease their size and combat disconnections. This thesis reveals that no current approach sufficiently addresses the energy constraints of the animal-mounted devices and the potential for disconnection. We propose a delay-tolerant, store-and-forward architecture that retains data, detects custom events, issues notifications, and answers remote and in-situ queries, based on requirements identified during field experiments we conducted. The architecture makes use of fixed infrastructure where available but also works in infrastructure-less ad hoc conditions. At its core, Mobile Ad Hoc Network (MANET) communication offloads data for long-term storage by sending it to farm servers via sinks that are part of the MANET, and handles in-situ queries issued by users collocated with the animals. The proposed MANET routing algorithm addresses high node mobility and disconnections, providing lower and more balanced energy usage, shorter delays and a higher success ratio in delivering answers to in-situ queries than more generic existing approaches. Problems of large-scale deployment of the envisaged system are also addressed. We discuss the configuration process performed during system installation, as well as pervasive mobile and home access to the target system. We propose cost-efficient strategies for installing sinks and connecting them to farm servers, adaptive to different requirements, estate layouts, available infrastructure and existing human and vehicle mobility. We also propose a cost-efficient security model for the target system based on public-key cryptography.
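The thesis's routing algorithm is not reproduced here; the sketch below only illustrates the general shape of the energy-balancing forwarding decision such a protocol must make when choosing a next hop towards a sink. The utility function and all values are hypothetical.

    def choose_next_hop(neighbours):
        # neighbours: node id -> (remaining battery energy in J, hops to sink).
        # A hypothetical trade-off between path length and battery depletion;
        # favouring well-charged nodes spreads energy usage across the herd.
        def utility(node):
            energy, hops = neighbours[node]
            return energy / (1 + hops)
        return max(neighbours, key=utility) if neighbours else None

    print(choose_next_hop({"cow03": (430.0, 2),
                           "cow07": (520.0, 3),
                           "cow12": (910.0, 4)}))  # -> "cow12"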
80

Modelling tools and methodologies for rapid protocell prototyping

Smaldon, James January 2011
The field of unconventional computing considers the possibility of implementing computational devices using novel paradigms and materials, to produce computers which may be more efficient, adaptable and robust than their silicon-based counterparts. The integration of computation into the realms of chemistry and biology will allow the embedding of engineered logic into living systems and could produce truly ubiquitous computing devices. Recently, advances in synthetic biology have resulted in the modification of microorganism genomes to create computational behaviour in living cells, so-called "cellular computing". The cellular computing paradigm offers the possibility of intelligent bacterial agents which respond to chemical signals received from the environment and communicate with one another. However, the complexity of altering an organism that has adapted to its environment over millions of years of evolution suggests an alternative approach, in which chemical computational devices are constructed entirely from the bottom up, giving the designer exquisite control over, and knowledge of, the system being created. This thesis presents the development of a simulation and modelling framework to aid the study and design of such bottom-up chemical computers, in which computational reactions are encapsulated within vesicles. The new "vesicle computing" paradigm is investigated using a sophisticated multi-scale simulation framework developed from mesoscale, macroscale and executable-biology techniques.
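One standard mesoscale technique for simulating the stochastic chemistry inside a single well-mixed compartment such as a vesicle is Gillespie's stochastic simulation algorithm; the sketch below is a generic textbook version, offered as an illustration of the kind of machinery a multi-scale protocell framework builds on, not the thesis's specific implementation. The reaction network and rates are illustrative.

    import random

    def gillespie(counts, reactions, t_end):
        # counts:    dict species -> molecule count
        # reactions: list of (rate constant, reactant list, product list),
        #            mass-action kinetics with unary/binary reactants
        t = 0.0
        while t < t_end:
            # Propensity of each reaction given current molecule counts.
            props = []
            for k, reactants, _ in reactions:
                a, used = k, {}
                for s in reactants:
                    a *= max(counts[s] - used.get(s, 0), 0)
                    used[s] = used.get(s, 0) + 1
                props.append(a)
            total = sum(props)
            if total == 0:
                break  # no reaction can fire
            t += random.expovariate(total)   # exponential waiting time
            r, acc, idx = random.uniform(0, total), 0.0, 0
            for idx, a in enumerate(props):  # pick a reaction by propensity
                acc += a
                if r <= acc:
                    break
            _, reactants, products = reactions[idx]
            for s in reactants:
                counts[s] -= 1
            for s in products:
                counts[s] += 1
        return counts

    # A + B -> C inside one vesicle; rates and counts are illustrative.
    print(gillespie({"A": 100, "B": 100, "C": 0},
                    [(0.01, ["A", "B"], ["C"])], t_end=5.0))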
