1

Exploring the memorability of multiple recognition-based graphical passwords and their resistance to guessability attacks

Chowdhury, Soumyadeb January 2015
Most users find it difficult to remember traditional text-based passwords. In order to cope with multiple passwords, users tend to adopt unsafe mechanisms such as writing passwords down or sharing them with others. Recognition-based graphical authentication systems (RBGSs) have been proposed as one potential solution to minimize these problems. However, most prior work in the field of RBGSs makes the unrealistic assumption that each user holds only a single password. It is also an untested assumption that RBGS passwords are resistant to being written down or verbally communicated. The main aim of the research reported in this thesis is to examine the memorability of multiple image passwords and their guessability using written descriptions (provided by the respective account holders). In this context, the thesis presents four user studies. The first user study (US1) examined the usability of multiple RBGS passwords with four different image types: Mikon, doodle, art and everyday objects (e.g. images of food, buildings, sports etc.). The results obtained in US1 demonstrated that subjects found it difficult to remember four RBGS passwords (of the same image type) and that the memorability of the passwords deteriorated over time. The results of a second usability study (US2), conducted using the same four image types as US1, demonstrated that the memorability of multiple RBGS passwords created using a mnemonic strategy does not improve, even when compared to existing multiple password studies and US1. In the context of guessability, one user study (GS1) examined the guessability of the RBGS passwords created in US1, using the textual descriptions given by the respective account holders, and another (GS2) did the same for the passwords created in US2. The results of both studies showed that RBGS passwords can be guessed from the password descriptions in the experimental set-up used. Additionally, this thesis presents a novel Passhint authentication system (PHAS). The results of a usability study (US3) demonstrated that the memorability of multiple PHAS passwords is better than in existing graphical authentication systems (GASs). Although the registration time is high, the authentication time for successful attempts is equivalent to or less than the times reported for previous GASs. The guessability study (GS3) showed that art passwords are the least guessable, followed by Mikon, doodle and object passwords, in that order. This thesis offers these initial studies as a proof of principle for conducting large-scale field studies with PHAS in the future. Based on a review of the existing literature, the thesis also identifies the need for a general set of principles for designing usability experiments that would allow systematic evaluation and comparison of different authentication systems. From the empirical studies (US1, US2 and US3) reported in this thesis, we found that multiple RBGS passwords are difficult to remember and that their memorability can be increased using the novel PHAS. We also recommend using art images as the passwords in PHAS, because they were found to be the least guessable from written descriptions in the empirical studies (GS1, GS2 and GS3) reported in this thesis.
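As a rough illustration of the class of scheme under evaluation, the sketch below implements a generic recognition-based challenge in Python: the user must pick their pass image out of a grid of decoys, once per round. The grid size, round count, image names and selection policy are all invented for illustration and are not the parameters studied in the thesis.

```python
# A minimal sketch of recognition-based graphical authentication.
# All names and parameters here are hypothetical.
import random

GRID_SIZE = 9   # images shown per challenge screen (assumed)
ROUNDS = 4      # one pass image must be recognised per round (assumed)

def build_challenge(pass_image, decoy_pool):
    """Mix the user's pass image with random decoys and shuffle."""
    grid = random.sample(decoy_pool, GRID_SIZE - 1) + [pass_image]
    random.shuffle(grid)
    return grid

def authenticate(pass_images, decoy_pool, choose):
    """Succeed only if the user picks their pass image in every round."""
    for pass_image in pass_images[:ROUNDS]:
        grid = build_challenge(pass_image, decoy_pool)
        if choose(grid) != pass_image:
            return False
    return True

# Example: a "user" who recognises their images perfectly.
decoys = [f"img_{i}.png" for i in range(100)]
secret = ["cat.png", "bridge.png", "doodle7.png", "mikon3.png"]
ok = authenticate(secret, decoys,
                  choose=lambda grid: next(g for g in grid if g in secret))
print(ok)  # True
```

The guessability studies correspond to replacing the `choose` function with an attacker guided only by the account holder's written description of the images.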
2

Investigation into an improved modular rule-based testing framework for business rules

Wetherall, Jodie January 2010
Rule testing in scheduling applications is a complex and potentially costly business problem. This thesis reports the outcome of research undertaken to develop a system for describing and testing scheduling rules against a set of scheduling data. The overall intention of the research was to reduce commercial scheduling costs by minimizing human domain-expert interaction within the scheduling process. The research was initiated following a consultancy project to develop a system to test driver schedules against the legal driving rules in force in the UK and the EU. One of the greatest challenges faced was interpreting the driving rules and translating them into the chosen programming language; this part of the project required considerable programming, testing and debugging effort. A potential problem then arises if the Department of Transport or the European Union alters the driving rules: considerable software development is likely to be required to support the new rule set. The approach considered takes into account the need for a modular software component that can be used not only in transport scheduling systems concerned with legal driving rules, but also in other systems that need to test temporal rules. The integration of the rule testing component into existing systems is key to making the proposed solution reusable. The research outcome proposes an alternative approach to rule definition, similar to that of RuleML, but with the addition of rule metadata to provide the ability to describe rules of a temporal nature. The rules can be serialised and deserialised between XML (eXtensible Markup Language) and objects within an object-oriented environment (in this case .NET with C#), providing a means of transmitting the rules over a communication infrastructure. The rule objects can then be compiled into an executable software library, allowing the rules to be tested more rapidly than traditional interpreted rules. Additional support functionality is also defined to provide a means of effectively integrating the rule testing engine into existing applications. Following the construction of a rule testing engine designed to meet the given requirements, a series of tests was undertaken to determine the effectiveness of the proposed approach. This led to improvements in the caching of constructed work plans, further improving performance. Tests were also carried out on the application of the proposed solution within alternative scheduling domains, and on the differences in computational performance and memory usage across system architectures, software frameworks and operating systems, with the support of Mono. Future work following on from this thesis is likely to involve the development of graphical design tools for creating the rules, improvements to the work plan construction algorithm, parallelisation of elements of the process to take better advantage of multi-core processors, and off-loading of the rule testing process onto dedicated or generic computational processors.
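The sketch below gives a flavour of the approach: a temporal rule carried as XML, deserialised into a rule object, and tested against schedule data. The element and attribute names and the 24-hour/540-minute limit are invented for illustration, and Python stands in for the thesis's .NET/C# environment.

```python
# Hypothetical XML rule format and rule object; loosely in the spirit of
# a RuleML-like definition with temporal metadata, not the thesis's own.
import xml.etree.ElementTree as ET

RULE_XML = """
<rules>
  <rule id="daily-driving" windowHours="24" maxDrivingMinutes="540"/>
</rules>
"""

class TemporalRule:
    def __init__(self, rule_id, window_hours, max_minutes):
        self.rule_id = rule_id
        self.window_hours = window_hours
        self.max_minutes = max_minutes

    def test(self, driving_blocks):
        """driving_blocks: (start_hour, minutes) tuples for one driver."""
        for start, _ in driving_blocks:
            total = sum(m for s, m in driving_blocks
                        if start <= s < start + self.window_hours)
            if total > self.max_minutes:
                return False
        return True

def load_rules(xml_text):
    """Deserialise XML into rule objects."""
    root = ET.fromstring(xml_text)
    return [TemporalRule(r.get("id"),
                         int(r.get("windowHours")),
                         int(r.get("maxDrivingMinutes")))
            for r in root.findall("rule")]

schedule = [(0, 270), (6, 200), (13, 120)]  # 590 min in one 24 h window
print([rule.test(schedule) for rule in load_rules(RULE_XML)])  # [False]
```

In the thesis, the deserialised rule objects are additionally compiled into an executable library rather than interpreted as above, which is where the reported speed advantage comes from.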
3

Open or closed? : the politics of software licensing in Argentina and Brazil

Jones, Ivor January 2015
Whether software is licensed under terms which ‘close off’ or make accessible the underlying code holds profound implications for development, given the centrality of software to contemporary life. In the 2000s, developing countries adopted policies to promote free and open source software (F/OSS) for reasons of technological autonomy and to reduce spending on royalties for foreign-produced proprietary software. However, the adoption of such policies varied across countries. Focusing upon Argentina and Brazil, two countries that reflect contrasting policy outcomes in promoting F/OSS, I explain why and how different policies came to be adopted by analysing the way in which institutions and patterns of association affected the lobbying power of advocates and opponents of F/OSS promotion. Advocates are generally weak actors, yet they may strengthen their lobbying power through embeddedness within political and state institutions, which offer opportunities to mobilise resources and forge ties with political decision-makers. Opponents are generally strong business actors, yet their lobbying power may be attenuated by weak concentration in business association, reducing their capacity to mobilise and coordinate support. In Argentina, where F/OSS advocates’ institutional embeddedness was weak and concentration in business association was strong, the government was prevented from promoting F/OSS, despite signs that it wished to do so. In Brazil, where F/OSS advocates’ institutional embeddedness was strong and concentration in business association was weak, the government promoted F/OSS despite vociferous opposition from some of the largest firms in the world.
4

Scalable audio processing across heterogeneous distributed resources : an investigation into distributed audio processing for Music Information Retrieval

Al-Shakarchi, Ahmad January 2013
Audio analysis algorithms and frameworks for Music Information Retrieval (MIR) are expanding rapidly, providing new ways to discover non-trivial information from audio sources, beyond what can be ascertained from unreliable metadata such as ID3 tags. MIR is a broad field, and many of its algorithms and analysis components are more accurate given a larger dataset for analysis, and often require extensive computational resources. This thesis investigates whether, through the use of modern distributed computing techniques, it is possible to design an MIR system that is scalable as the number of participants increases, that adheres to copyright laws and restrictions, and that at the same time enables access to a global database of music for MIR applications and research. A scalable platform for MIR analysis would benefit the MIR and scientific communities as a whole. A distributed MIR platform that encompasses the creation of MIR algorithms and workflows, their distribution, and results collection and analysis is presented in this thesis. The framework, called DART - Distributed Audio Retrieval using Triana - is designed to facilitate the submission of MIR algorithms and computational tasks against either remotely held music and audio content, or audio provided and distributed by the MIR researcher. Initially, a detailed distributed DART architecture is presented, along with simulations to evaluate the validity and scalability of the architecture. A parameter sweep experiment to find the optimal parameters of the Sub-Harmonic Summation (SHS) algorithm is presented, in order to test the platform and use it to perform useful, real-world experiments that contribute new knowledge to the field. DART is tested on various pre-existing distributed computing platforms, and the feasibility of creating a scalable infrastructure for workflow distribution is investigated throughout the thesis, along with the different workflow distribution platforms that could be integrated into the system. The DART parameter sweep experiments begin on a small scale and work up towards the goal of running experiments on thousands of nodes, in order to truly evaluate the scalability of the DART system. The result of this research is a functional and scalable distributed MIR research platform that is capable of performing real-world MIR analysis, as demonstrated by the successful completion of several large-scale SHS parameter sweep experiments across a variety of input data, using various distribution methods, and by finding the optimal parameters of the implemented SHS algorithm. DART is shown to be highly adaptable, both in terms of the distributed MIR analysis algorithm and the distribution method used.
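For readers unfamiliar with the algorithm at the centre of the parameter sweep, the following is a rough sketch of Sub-Harmonic Summation: each candidate fundamental frequency is scored by summing spectral magnitude at its harmonics with a decaying weight, and the best-scoring candidate wins. The harmonic count and decay factor are exactly the kind of parameters such a sweep explores; the values below are illustrative defaults, not those the thesis found optimal.

```python
# A rough sketch of Sub-Harmonic Summation (SHS) pitch estimation.
# n_harmonics and decay are sweepable parameters; values are assumed.
import numpy as np

def shs_pitch(signal, sr, f0_candidates, n_harmonics=10, decay=0.84):
    """Return the f0 candidate whose harmonic series carries most energy."""
    spectrum = np.abs(np.fft.rfft(signal))
    nyquist = sr / 2.0

    def strength(f0):
        s = 0.0
        for n in range(1, n_harmonics + 1):
            if n * f0 > nyquist:
                break
            bin_idx = int(round(n * f0 * len(signal) / sr))
            s += (decay ** (n - 1)) * spectrum[bin_idx]
        return s

    return max(f0_candidates, key=strength)

# Example: a 220 Hz tone with a strong 2nd harmonic still scores 220 Hz
# highest, because the summation credits energy at 440 Hz back to 220 Hz.
sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 220 * t) + 0.8 * np.sin(2 * np.pi * 440 * t)
print(shs_pitch(tone, sr, f0_candidates=np.arange(80.0, 400.0)))  # ~220.0
```

A DART-style parameter sweep amounts to distributing many `(n_harmonics, decay, ...)` combinations across worker nodes and comparing the resulting pitch accuracy against ground truth.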
5

Performance engineering of hybrid message passing + shared memory programming on multi-core clusters

Chorley, Martin James January 2012
The hybrid message passing + shared memory programming model combines two parallel programming styles within the same application in an effort to improve the performance and efficiency of parallel codes on modern multi-core clusters. This thesis presents a performance study of this model as it applies to two Molecular Dynamics (MD) applications. Both a large scale production MD code and a smaller scale example MD code have been adapted from existing message passing versions by adding shared memory parallelism to create hybrid message passing + shared memory applications. The performance of these hybrid applications has been investigated on different multi-core clusters and compared with the original pure message passing codes. This performance analysis reveals that the hybrid message passing + shared memory model provides performance improvements under some conditions, while the pure message passing model provides better performance in others. Typically, when running on small numbers of cores the pure message passing model provides better performance than the hybrid message passing + shared memory model, as hybrid performance suffers due to increased overheads from the use of shared memory constructs. However, when running on large numbers of cores the hybrid model performs better as these shared memory overheads are minimised while the pure message passing code suffers from increased communication overhead. These results depend on the interconnect used. Hybrid message passing + shared memory molecular dynamics codes are shown to exhibit different communication profiles from their pure message passing versions and this is revealed to be a large factor in the performance difference between pure message passing and hybrid message passing + shared memory codes. An extension of this result shows that the choice of interconnection fabric used in a multi-core cluster has a large impact on the performance difference between the pure message passing and the hybrid code. The factors affecting the performance of the applications have been analytically examined in an effort to describe, generalise and predict the performance of both the pure message passing and hybrid message passing + shared memory codes.
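The two-level structure can be sketched compactly: message passing splits the simulation domain between processes, and shared-memory workers split each process's slab between themselves. The sketch below uses Python with mpi4py (assuming it is installed) and a thread pool purely to show that structure; the thesis's MD codes are real message passing + shared memory applications, and the kernel here is a stand-in.

```python
# Hybrid message passing + shared memory, schematically.
# Run with e.g.: mpiexec -n 4 python hybrid_sketch.py
from concurrent.futures import ThreadPoolExecutor
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
THREADS = 4  # shared-memory workers per process (assumed)

# Message-passing level: each rank owns a slab of the domain.
particles = np.arange(rank * 1000, (rank + 1) * 1000, dtype=np.float64)

def kernel(chunk):
    # Stand-in for a force/energy computation over part of the slab.
    return float(np.sum(chunk ** 2))

# Shared-memory level: threads split the rank's slab between them.
with ThreadPoolExecutor(max_workers=THREADS) as pool:
    rank_result = sum(pool.map(kernel, np.array_split(particles, THREADS)))

# Back to message passing: combine per-rank results across the cluster.
total = comm.allreduce(rank_result, op=MPI.SUM)
if rank == 0:
    print(f"combined result over {size} ranks: {total}")
```

The trade-off the thesis measures falls out of this structure: fewer, larger messages per node at high core counts, set against the extra overhead of the shared-memory layer.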
6

Web page performance analysis

Chiew, Thiam Kian January 2009
Computer systems play an increasingly crucial and ubiquitous role in human endeavour by carrying out or facilitating tasks and providing information and services. How much work these systems can accomplish, within a certain amount of time and using a certain amount of resources, characterises the systems’ performance, which is a major concern as the systems are planned, designed, implemented and deployed, and as they evolve. As one of the most popular computer systems, the Web is inevitably scrutinised in terms of performance analysis, dealing with its speed, capacity, resource utilisation, and availability. Performance analyses of the Web are normally done from the perspective of the Web servers and the underlying network (the Internet). This research, on the other hand, approaches Web performance analysis from the perspective of Web pages. The performance metric of interest here is response time, which is studied as an attribute of Web pages rather than purely a result of network and server conditions. A framework consisting of measurement, modelling, and monitoring (3Ms) of Web pages, revolving around response time, is adopted to support the performance analysis activity. The measurement module enables Web page response time to be measured and is used to support the modelling module, which in turn provides references for the monitoring module; the monitoring module estimates response time. The three modules are used in the software development lifecycle to ensure that developed Web pages deliver at worst a satisfactory response time (within a maximum acceptable time), and preferably a much better one, thereby maximising the efficiency of the pages. The framework proposes a systematic way to understand response time as it relates to specific characteristics of Web pages, and explains how the response time of individual Web pages can be examined and improved.
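As a trivial illustration of the measurement idea, the sketch below times the fetch of a page plus its listed resources and compares the total against a response-time budget. It uses a naive serial model (real browsers fetch resources in parallel), standard-library calls only, and a placeholder URL; it is not the thesis's measurement module.

```python
# Timing a page fetch against a response-time budget (illustrative only).
import time
import urllib.request

def fetch_time(url):
    """Return (seconds, bytes) for one HTTP GET of the given URL."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=30) as resp:
        body = resp.read()
    return time.perf_counter() - start, len(body)

def page_response_time(base_url, resource_urls, budget=2.0):
    """Sum fetch times serially and flag the page if it misses its budget."""
    total = 0.0
    for url in [base_url] + resource_urls:
        seconds, size = fetch_time(url)
        total += seconds
        print(f"{url}: {seconds:.3f}s ({size} bytes)")
    print(f"total: {total:.3f}s -> "
          f"{'OK' if total <= budget else 'exceeds budget'}")
    return total

page_response_time("https://example.com/", [])  # placeholder page
```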
7

The impact of localized road accident information on road safety awareness

Zheng, Yunan January 2007
The World Health Organization (WHO) estimates that road traffic accidents represent the third leading cause of ‘death and disease’ worldwide. Many countries have, therefore, launched safety campaigns intended to reduce road traffic accidents by increasing public awareness. In almost every case, however, a reduction in the total number of fatalities has not been matched by a comparable fall in the total frequency of road traffic accidents; low-severity incidents remain a significant problem. One possible explanation is that these road safety campaigns have had less effect than design changes: active safety devices, such as anti-lock braking, and passive measures, such as side impact protection, serve to mitigate the consequences of those accidents that do occur. A number of psychological phenomena, such as attribution error, explain the mixed success of road safety campaigns: most drivers believe that they are less likely to be involved in an accident than other motorists. Existing road safety campaigns do little to address this problem; they focus on national and regional statistics that often seem remote from the local experiences of road users. Our argument is that localized road accident information would have a greater impact on people’s safety awareness. This thesis, therefore, describes the design and development of a software tool to provide the general public with access to information on the location and circumstances of road accidents in a Scottish city. We also present the results of an evaluation to determine whether the information provided by this software has any impact on individual risk perception. A route planning experiment was also carried out; its results provide further evidence that road users would consider accident information if it were available to them.
8

Analysing accident reports using structured and formal methods

Burns, Colin Paul January 2000
Formal methods are proposed as a means to improve accident reports, such as the report into the 1996 fire in the Channel Tunnel between the UK and France. The size and complexity of accident reports create difficulties for formal methods, which traditionally suffer from problems of scalability and poor readability. This thesis demonstrates that features of an engineering-style formal modelling process, particularly the structuring of activity and management of information, reduce the impact of these problems and improve the accuracy of formal models of accident reports. This thesis also contributes a detailed analysis of the methodological requirements for constructing accident report models. Structured, methodical construction and mathematical analysis of the models elicits significant problems in the content and argumentation of the reports. Once elicited, these problems can be addressed. This thesis demonstrates the benefits and limitations of taking a wider scope in the modelling process than is commonly adopted for formal accident analysis. We present a deontic action logic as a language for constructing models of accident reports. Deontic action models offer a novel view of the report, which highlights both the expected and actual behaviour in the report, and facilitates examination of the conflict between the two. This thesis contributes an objective analysis of the utility of both deontic and action logic operators to the application of modelling accident reports. A tool is also presented that executes a subset of the logic, including these deontic and action logic operators.
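To make the deontic idea concrete, the toy sketch below states what agents were obliged or forbidden to do, records what they actually did, and surfaces the conflicts between expected and actual behaviour. The norms and events are invented, loosely in the spirit of an accident report; the thesis uses a full deontic action logic, not this simplification.

```python
# Expected-vs-actual checking with toy deontic norms (hypothetical data).
OBLIGED, FORBIDDEN = "obliged", "forbidden"

norms = [
    (OBLIGED,   "operator", "sound_alarm"),
    (OBLIGED,   "driver",   "stop_train"),
    (FORBIDDEN, "operator", "reset_detector"),
]

# What the report says actually happened.
actual = {("operator", "reset_detector"), ("driver", "stop_train")}

def violations(norms, actual):
    """Yield unmet obligations and breached prohibitions."""
    for mode, agent, action in norms:
        done = (agent, action) in actual
        if mode == OBLIGED and not done:
            yield f"{agent} failed obligatory action '{action}'"
        elif mode == FORBIDDEN and done:
            yield f"{agent} performed forbidden action '{action}'"

for v in violations(norms, actual):
    print(v)
# operator failed obligatory action 'sound_alarm'
# operator performed forbidden action 'reset_detector'
```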
9

Embedding expert systems in semi-formal domains : examining the boundaries of the knowledge base

Whitley, Edgar A. January 1990
This thesis examines the use of expert systems in semi-formal domains. The research identifies the main problems with semi-formal domains and proposes and evaluates a number of different solutions to them. The thesis considers the traditional approach to developing expert systems, which treats domains as formal, and notes that it continually faces problems resulting from informal features of the problem domain. To circumvent these difficulties, experience and other subjective qualities are often drawn upon, but they are not supported by the traditional approach to design. The thesis examines the formal approach and compares it with a semi-formal approach to designing expert systems, heavily influenced by the socio-technical view of information systems. From this basis it examines a number of problems that limit the construction and use of knowledge bases in semi-formal domains. These limitations arise from the nature of the problem being tackled, in particular problems of natural language communication and tacit knowledge, and also from the character of computer technology and the role it plays. The thesis explores the possible mismatch between a human user and the machine and models the various types of confusion that can arise. It then describes a number of practical solutions to the problems identified. These solutions are implemented in an expert system shell (PESYS), developed as part of the research. The resulting solutions, based on non-linear documents and other software tools that open up the reasoning of the system, support users of expert systems in examining the boundaries of the knowledge base, helping them avoid and overcome any confusion that has arisen. In this way users are encouraged to use their own skills and experience in conjunction with an expert system to successfully exploit this technology in semi-formal domains.
10

Flexible physical interfaces

Villar, Nicolas January 2007
Human-computer interface devices are rigid, and afford little or no opportunity for end-user adaptation. This thesis proposes that valuable new interaction possibilities can be generated through the development of user interface hardware that is increasingly flexible and allows end-users to physically shape, construct and modify physical interfaces for interactive systems. The work is centred around the development of a novel platform for flexible user interfaces (called VoodooIO) that allows end-users to compose and adapt physical control structures in a manner that is both versatile and simple to use. VoodooIO has two main physical elements: a pliable material (called the substrate), and a set of physical user interface controls which can be arranged on the surface of the substrate. The substrate can be shaped, applied to existing surfaces, attached to objects and placed on walls and furniture to designate interface areas on which users can spatially lay out controls. From a technical perspective, the design of VoodooIO is based on a novel architecture for user interfaces as networks of controls, where each control is implemented as a network node with physical input and output capabilities. The architecture overcomes the inflexibility usually imposed by hard-wired circuitry in traditional interface devices by enabling individual control elements to be connected and disconnected ad hoc from a shared network bus. The architecture includes support for a wide and extensible range of control types; fast control identification and presence detection; and an application-level interface that abstracts from low-level implementation details and network management processes. The concrete contributions to the field of human-computer interaction include a motivation for the development of flexible physical interfaces, a fully working example of such a technology, and insights gathered from its application and study.
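The "network of controls" architecture can be pictured as a shared bus that applications observe: control nodes announce themselves when attached, disappear when detached, and emit input events in between. The in-memory sketch below only illustrates that event model; the names are invented, and VoodooIO itself is dedicated hardware on a physical substrate.

```python
# Schematic model of controls joining and leaving a shared bus.
class Bus:
    def __init__(self):
        self.controls = {}   # node_id -> control type
        self.listeners = []  # application-level callbacks

    def attach(self, node_id, kind):
        """A control is pushed into the substrate: announce its presence."""
        self.controls[node_id] = kind
        self._emit(("added", node_id, kind))

    def detach(self, node_id):
        """A control is pulled out: the application sees it disappear."""
        kind = self.controls.pop(node_id)
        self._emit(("removed", node_id, kind))

    def input_event(self, node_id, value):
        """The user manipulates an attached control."""
        self._emit(("input", node_id, value))

    def _emit(self, event):
        for listener in self.listeners:
            listener(event)

bus = Bus()
bus.listeners.append(print)   # the application just logs events here
bus.attach("n1", "dial")
bus.input_event("n1", 0.75)   # user turns the dial
bus.detach("n1")              # user re-arranges the interface
```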
