41

Exploring the memorability of multiple recognition-based graphical passwords and their resistance to guessability attacks

Chowdhury, Soumyadeb January 2015 (has links)
Most users find it difficult to remember traditional text-based passwords. In order to cope with multiple passwords, users tend to adopt unsafe mechanisms like writing down the passwords or sharing them with others. Recognition-based graphical authentication systems (RBGSs) have been proposed as one potential solution to minimize the above problems. However, most prior work in the field of RBGSs makes the unrealistic assumption of studying a single password. It is also an untested assumption that RBGS passwords are resistant to being written down or verbally communicated. The main aim of the research reported in this thesis is to examine the memorability of multiple image passwords and their guessability using written descriptions (provided by the respective account holders). In this context, the thesis presents four user studies. The first user study (US1) examined the usability of multiple RBGS passwords with four different image types: Mikon, doodle, art and everyday objects (e.g. images of food, buildings, sports etc.). The results obtained in US1 demonstrated that subjects found it difficult to remember four RBGS passwords (of the same image type) and that the memorability of the passwords deteriorated over time. The results of another usability study (US2), conducted using the same four image types as in US1, demonstrated that the memorability of multiple RBGS passwords created using a mnemonic strategy does not improve, even when compared to the existing multiple-password studies and US1. In the context of guessability, a user study (GS1) examined the guessability of the RBGS passwords created in US1, using the textual descriptions given by the respective account holders. Another study (GS2) examined the guessability of the RBGS passwords created in US2, again using descriptions given by the respective account holders. The results obtained from both studies showed that RBGS passwords can be guessed from the password descriptions in the experimental set-up used. Additionally, this thesis presents a novel Passhint authentication system (PHAS). The results of a usability study (US3) demonstrated that the memorability of multiple PHAS passwords is better than in existing graphical authentication systems (GASs). Although the registration time is high, the authentication time for successful attempts is either equivalent to or less than the time reported for previous GASs. The guessability study (GS3) showed that art passwords are the least guessable, followed by Mikon, doodle and object passwords in that order. This thesis offers these initial studies as a proof of principle for conducting large-scale field studies with PHAS in the future. Based on a review of the existing literature, this thesis also identifies the need for a general set of principles for designing usability experiments that would allow systematic evaluation and comparison of different authentication systems. From the empirical studies (US1, US2 and US3) reported in this thesis, we found that multiple RBGS passwords are difficult to remember and that their memorability can be increased using the novel PHAS. We also recommend using art images as the passwords in PHAS, because they were found to be the least guessable from the written descriptions in the empirical studies (GS1, GS2 and GS3) reported in this thesis.
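The recognition-based scheme under study can be illustrated with a minimal sketch: authentication proceeds over several challenge panels, each mixing one of the account holder's password images with decoys. The panel size, number of rounds and one-target-per-round policy below are illustrative assumptions, not the exact configuration used in US1-US3 or in PHAS.

```python
import random

def build_challenge(password_images, decoy_pool, panel_size=9, rng=None):
    """Build one recognition panel: one password image mixed with decoys."""
    rng = rng or random.Random()
    target = rng.choice(password_images)
    decoys = rng.sample([d for d in decoy_pool if d not in password_images],
                        panel_size - 1)
    panel = decoys + [target]
    rng.shuffle(panel)
    return panel, target

def authenticate(password_images, decoy_pool, select, rounds=4):
    """The user must pick a password image in every round to authenticate."""
    rng = random.Random(42)
    for i in range(rounds):
        panel, target = build_challenge(password_images, decoy_pool, rng=rng)
        chosen = select(i, panel)          # callback simulating the user's choice
        if chosen != target:
            return False
    return True

# Example: a "user" who always recognises their own image succeeds.
password = ["img_cat.png", "img_bridge.png", "img_soup.png", "img_kite.png"]
decoys = [f"decoy_{i}.png" for i in range(50)]
perfect_user = lambda _round, panel: next(p for p in panel if p in password)
print(authenticate(password, decoys, perfect_user))  # True
```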
42

Open or closed? : the politics of software licensing in Argentina and Brazil

Jones, Ivor January 2015 (has links)
Whether software is licensed under terms which ‘close off’ or make accessible the underlying code holds profound implications for development, given the centrality of this good to contemporary life. In the 2000s, developing countries adopted policies to promote free and open source software (F/OSS) for reasons of technological autonomy and to reduce spending on royalties for foreign-produced proprietary software. However, the adoption of such policies varied across countries. Focusing upon Argentina and Brazil, two countries that reflect contrasting policy outcomes in promoting F/OSS, I explain why and how different policies came to be adopted by analysing the way in which institutions and patterns of association affected the lobbying power of advocates and opponents of F/OSS promotion. Advocates are generally weak actors, yet they may strengthen their lobbying power through embeddedness within political and state institutions, which offer opportunities to mobilise resources and forge ties with political decision-makers. Opponents are generally strong business actors, yet their lobbying power may be attenuated by weak concentration in business association, reducing their capacity to mobilise and coordinate support. In Argentina, where F/OSS advocates’ institutional embeddedness was weak and concentration in business association was strong, the government was prevented from promoting F/OSS, despite signs that it wished to do so. In Brazil, where F/OSS advocates’ institutional embeddedness was strong and concentration in business association was weak, the government promoted F/OSS despite vociferous opposition from amongst the largest firms in the world.
43

Investigation into an improved modular rule-based testing framework for business rules

Wetherall, Jodie January 2010 (has links)
Rule testing in scheduling applications is a complex and potentially costly business problem. This thesis reports the outcome of research undertaken to develop a system to describe and test scheduling rules against a set of scheduling data. The overall intention of the research was to reduce commercial scheduling costs by minimizing human domain-expert interaction within the scheduling process. The research was initiated following a consultancy project to develop a system to test driver schedules against the legal driving rules in force in the UK and the EU. One of the greatest challenges faced was interpreting the driving rules and translating them into the chosen programming language; this part of the project required considerable effort for programming, testing and debugging. A potential problem then arises if the Department of Transport or the European Union alters the driving rules: considerable software development is likely to be required to support the new rule set. The approach considered takes into account the need for a modular software component that can be used not just in transport scheduling systems concerned with legal driving rules, but that may also be integrated into other systems that need to test temporal rules. The integration of the rule-testing component into existing systems is key to making the proposed solution reusable. The research outcome proposes an alternative approach to rule definition, similar to that of RuleML, but with the addition of rule metadata to provide the ability to describe rules of a temporal nature. The rules can be serialised and deserialised between XML (eXtensible Markup Language) and objects within an object-oriented environment (in this case .NET with C#), to provide a means of transmitting the rules over a communication infrastructure. The rule objects can then be compiled into an executable software library, allowing the rules to be tested more rapidly than traditional interpreted rules. Additional support functionality is also defined to provide a means of effectively integrating the rule-testing engine into existing applications. Following the construction of a rule-testing engine designed to meet the given requirements, a series of tests was undertaken to determine the effectiveness of the proposed approach. This led to improvements in the caching of constructed work plans to further improve performance. Tests were also carried out on the application of the proposed solution within alternative scheduling domains, and on the differences in computational performance and memory usage across system architectures, software frameworks and operating systems, with the support of Mono. Future work expected to follow on from this thesis will likely involve investigations into graphical design tools for the creation of rules, improvements to the work-plan construction algorithm, parallelisation of elements of the process to take better advantage of multi-core processors, and off-loading of the rule-testing process onto dedicated or generic computational processors.
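To make the rule-metadata idea concrete, here is a minimal sketch, written in Python for brevity rather than the thesis's .NET/C# environment, of a temporal rule carrying metadata, round-tripped through XML and tested against schedule data. The element names and the rolling-window driving-hours rule are illustrative assumptions, not the thesis's actual schema or rule set.

```python
import xml.etree.ElementTree as ET
from dataclasses import dataclass

@dataclass
class TemporalRule:
    name: str
    window_hours: int      # metadata: length of the rolling window
    max_value: float       # metadata: limit that must not be exceeded

    def to_xml(self) -> str:
        el = ET.Element("rule", name=self.name)
        ET.SubElement(el, "window", unit="hours").text = str(self.window_hours)
        ET.SubElement(el, "limit").text = str(self.max_value)
        return ET.tostring(el, encoding="unicode")

    @classmethod
    def from_xml(cls, xml_text: str) -> "TemporalRule":
        el = ET.fromstring(xml_text)
        return cls(name=el.get("name"),
                   window_hours=int(el.find("window").text),
                   max_value=float(el.find("limit").text))

    def test(self, hours_per_day: list) -> bool:
        """True if no rolling window of the configured length exceeds the limit."""
        days = self.window_hours // 24
        return all(sum(hours_per_day[i:i + days]) <= self.max_value
                   for i in range(len(hours_per_day) - days + 1))

rule = TemporalRule("weekly-driving-limit", window_hours=7 * 24, max_value=56.0)
restored = TemporalRule.from_xml(rule.to_xml())        # round-trip via XML
print(restored.test([9, 9, 9, 9, 9, 4, 0, 9, 9]))      # True: no 7-day window exceeds 56h
```

A production engine would compile many such rule objects into an executable library rather than interpret them one by one, but the serialisation round-trip above is the part that lets rules be transmitted and updated without redeploying the scheduler.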
44

Measuring the business success of enterprise systems projects

Jones, Richard January 2016 (has links)
Enterprise resource planning (ERP) systems are integrated application software packages that meet most of the information systems requirements of business organisations. ERP, or more simply enterprise systems (ES), have constituted the majority of investment in information technology by global businesses over the last two decades and have had a profound impact upon the way these businesses have been managed. Yet there is not a good understanding of how the business success, as opposed to the implementation project success, of enterprise systems projects can be evaluated. Of the two success concepts, the extant literature places more emphasis upon project success than upon business success. This research is directed at exploring the relationship between planned business success, generally included in ERP project business cases, and subsequent, empirical, post-implementation measures of business success. The study involved interviewing 20 key informants from both ERP-adopting companies and ERP consulting firms to answer the research question of ‘how do businesses evaluate the business success, as opposed to the project implementation success, of enterprise systems?’ Using 10 a priori categories derived from the literature, 100 correlated categories were identified from the interview data by use of a three-stage coding process; 25 categories were selected from this larger group to identify the relationships most pertinent to the central research question. The key findings of the research were that the strength of the ERP system business case was generally determined by three main categories of business driver: strategic business change, a lower-cost business model and business survival. These categories of business driver then determined the criteria for business success applied to the project in post-implementation stages. Where lower-cost business models, often involving shared service centres and outsourcing of these centralised functions, were the driver, the business case metrics were more likely to be used for measurement of business success. Otherwise there was generally either a dissociation of the benefits estimates in business cases from subsequent success measurement, or simply an absence of estimated benefits. This framework for the evaluation of the business success of enterprise systems has advantages over measuring the delivery of estimated, a priori, business benefits because: (1) The assumptions underlying the initial estimates of benefits will generally be invalidated by the changed business environment prevailing after the lengthy implementation of a systems project. This reduces the value of comparisons with empirical post-implementation measures of business success. Further, measures of business success based upon delivered benefits assume a degree of causality between the new ERP system and business benefits; however, it is often difficult to disentangle benefits from new business processes enabled by the enterprise system from benefits derived from other business initiatives. (2) Actual, realised business benefits of a new IT system are often not measured, for organisational and behavioural reasons. For example, there may be a lack of continuity of project stakeholders over the implementation period; or, more simply, people are reluctant to study what are viewed as past and irreversible events. (3) A final factor is the absence of accounting or other measurement systems to evaluate actual benefits, often the result of the replacement of the legacy accounting systems used to estimate the initial planned benefits. This research also adds considerably to the current literature on the implementation of enterprise systems, which has generally studied project success rather than business success because of the relative ease of measuring project implementation success.
45

Improving reliability of service oriented systems with consideration of cost and time constraints in clouds

Guo, Wei January 2016 (has links)
Web service technology is increasingly popular for the implementation of service oriented systems. Additionally, cloud computing platforms provide an efficient and readily available environment, offering the computing, networking and storage resources that reduce the cost to companies of deploying and managing their systems. Therefore, more and more service oriented systems are migrated to and deployed in clouds. However, these applications need to be improved in terms of reliability, because certain components have low reliability. Fault tolerance approaches can improve software reliability, but they require additional redundant units, which increases the cost and the execution time of the entire system. Therefore, a migration and deployment framework that applies fault tolerance approaches while taking global constraints on cost and execution time into consideration may be needed. This work proposes a migration and deployment framework to guide the designers of service oriented systems in improving reliability under global constraints in clouds. A multilevel redundancy allocation model is adopted in the framework to assign redundant units to the structure of systems with fault tolerance approaches. An improved genetic algorithm is utilised to generate the migration plan, taking the execution time of systems and the cost constraints into consideration. Fault tolerance approaches (such as NVP, RB and Parallel) can be integrated into the framework so as to improve the reliability of the components at the bottom level. Additionally, a new encoding mechanism based on linked lists is proposed to improve the performance of the genetic algorithm by reducing the movement of redundant units in the model. The experiments compare the performance of the encoding mechanisms and of the model integrated with different fault tolerance approaches. The empirical studies show that the proposed framework, with a multilevel redundancy allocation model integrated with fault tolerance approaches, can generate migration plans for service oriented systems in clouds with the consideration of cost and execution time.
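The role of the genetic algorithm can be illustrated with a minimal sketch of redundancy allocation under a cost budget. It assumes a flat (single-level) structure, simple parallel redundancy per component and made-up reliability and cost figures, rather than the thesis's multilevel model, linked-list encoding or NVP/RB fault tolerance schemes.

```python
import random

# Per-component (reliability of one unit, cost of one unit); illustrative numbers.
COMPONENTS = [(0.90, 3.0), (0.85, 2.0), (0.95, 4.0), (0.80, 1.5)]
COST_BUDGET = 30.0

def system_reliability(alloc):
    # Parallel redundancy within each component; series composition across components.
    r = 1.0
    for (unit_r, _), k in zip(COMPONENTS, alloc):
        r *= 1.0 - (1.0 - unit_r) ** k
    return r

def cost(alloc):
    return sum(c * k for (_, c), k in zip(COMPONENTS, alloc))

def fitness(alloc):
    # Infeasible plans (over budget) score zero.
    return system_reliability(alloc) if cost(alloc) <= COST_BUDGET else 0.0

def genetic_search(pop_size=40, generations=200, max_units=5, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(1, max_units) for _ in COMPONENTS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(COMPONENTS))
            child = a[:cut] + b[cut:]                  # one-point crossover
            if rng.random() < 0.2:                     # mutation
                i = rng.randrange(len(COMPONENTS))
                child[i] = rng.randint(1, max_units)
            children.append(child)
        pop = survivors + children
    best = max(pop, key=fitness)
    return best, system_reliability(best), cost(best)

print(genetic_search())   # (allocation per component, reliability, total cost)
```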
46

A quality assessment framework for knowledge management software

Habaragamu Ralalage, Wijendra Peiris Gunathilake January 2016 (has links)
CONTEXT: Knowledge is a strategic asset to any organisation due to its usefulness in supporting innovation, performance improvement and competitive advantage. In order to gain the maximum benefit from knowledge, the effective management of various forms of knowledge is increasingly viewed as vital. A Knowledge Management System (KMS) is a class of Information System (IS) that manages organisational knowledge, and KMS software (KMSS) is a KMS component that can be used as a platform for managing various forms of knowledge. The evaluation of the effectiveness or quality of KMS software is challenging, and no systematic evidence exists on the quality evaluation of knowledge management software which considers the various aspects of Knowledge Management (KM) to ensure the effectiveness of a KMS. AIM: The overall aim is to formalise a quality assessment framework for knowledge management software (KMSS). METHOD: In order to achieve the aim, the research was planned and carried out in the stages identified in the software engineering research methods literature. The need for this research was identified through a mapping study of prior KMS research. Data collected through a Systematic Literature Review (SLR) and from the evaluation of a KMSS prototype by a sample of 58 regular users of knowledge management software served as the main sources for the formalisation of the quality assessment framework. A test bed for empirical data collection was designed and implemented based on key principles of learning. The formalised quality assessment framework was applied to select knowledge management software and was evaluated for effectiveness. RESULTS: The final outcome of this research is a quality assessment framework consisting of 41 quality attributes categorised under content quality, platform quality and user satisfaction. A Quality Index was formulated by integrating these three categories of quality attributes to evaluate the quality of knowledge management software. CONCLUSION: This research generates novel contributions by presenting a framework for the quality assessment of knowledge management software, not previously available in the research literature. This framework is a valuable resource for any organisation or individual in selecting the most suitable knowledge management software by considering the quality attributes of the software.
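As an illustration of how such a Quality Index might combine the three categories, here is a minimal sketch. The attribute names, the 1-5 rating scale, equal weighting within each category and the 0.4/0.4/0.2 category weights are illustrative assumptions; the thesis defines 41 attributes and its own aggregation.

```python
def category_score(scores):
    # Mean of attribute ratings on a 1-5 scale (assumed equal in-category weights).
    return sum(scores.values()) / len(scores)

def quality_index(content, platform, satisfaction, weights=(0.4, 0.4, 0.2)):
    parts = (category_score(content),
             category_score(platform),
             category_score(satisfaction))
    return sum(w * p for w, p in zip(weights, parts))

kms_index = quality_index(
    content={"accuracy": 4.2, "completeness": 3.8, "currency": 4.0},
    platform={"usability": 4.5, "security": 3.9, "interoperability": 3.5},
    satisfaction={"perceived_usefulness": 4.1, "intention_to_reuse": 4.3},
)
print(round(kms_index, 2))   # 4.03 -- candidates can be compared on the same scale
```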
47

Commitment models and concurrent bilateral negotiation strategies in dynamic service markets

Ponka, Ilja January 2009 (has links)
Technologies such as Web Services, the Semantic Web and the Grid may give rise to new electronic service markets, which have so many services, providers and consumers that software agents need to be employed to search and configure the services for their users. To this end, in this thesis, we investigate bilateral negotiations between agents of service providers and consumers in such markets. Our main interest is in decommitment policies or rules that govern reneging on a commitment, which are essential for operating effectively in such dynamic settings. The work is divided into two main parts. In part I (chapters 3-8), we investigate how the decommitment policies, through the parties’ behaviour, affect the combined utility of all market participants. As a tool, we use decisions that parties make during their interaction. These decisions have previously been discussed in the law and economics literature, but this is the first empirical investigation of them in a dynamic service market setting. We also consider settings (for example, with incomplete information) that have not been addressed before. In particular, we take four of these decisions — performance, reliance, contract and selection — and consider them one by one in a variety of settings. We create a number of novel decommitment policies and increase the understanding of these decisions in electronic markets. In part II (chapters 9-11), we consider a buyer agent that engages in multiple negotiations with different seller agents concurrently and consider how decommitment policies should affect its behaviour. Specifically, we develop a detailed adaptive model for concurrent bilateral negotiation by extending the existing work in several directions. In particular, we pay special attention to choosing the opponents to negotiate with and choosing the number of negotiations to have concurrently, but we also address questions such as bilateral negotiation tactics and interconnected negotiations on different services. These ideas are again evaluated empirically.
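One family of decommitment policies studied in this literature attaches a penalty fee to reneging; the minimal sketch below shows how a consumer agent holding a contract might decide whether a newly arrived offer justifies decommitting. The penalty-fee rule and the numbers are illustrative assumptions rather than the specific policies developed in the thesis.

```python
from dataclasses import dataclass

@dataclass
class Contract:
    provider: str
    utility: float      # consumer's utility from the current agreement
    penalty: float      # fee payable to the counterparty on reneging

def should_decommit(current: Contract, new_offer_utility: float) -> bool:
    """Renege only if the new offer still wins after paying the penalty."""
    return new_offer_utility - current.penalty > current.utility

current = Contract(provider="sellerA", utility=0.62, penalty=0.10)
for offer in (0.65, 0.70, 0.80):        # better offers arriving over time
    print(offer, should_decommit(current, offer))
# 0.65 False (gain eaten by the penalty), 0.70 False (0.60 < 0.62), 0.80 True
```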
48

On the derivation of value from geospatial linked data

Black, Jennifer January 2013 (has links)
Linked Data (LD) is a set of best practices for publishing and connecting structured data on the web. LD and Linked Open Data (LOD) are often conflated, to the point where there is an expectation that LD will be free and unrestricted. The current research looks at deriving commercial value from LD. When both free and paid-for data are available, the issue arises of how users will react to a situation where two or more options are provided. The current research examined the factors that affect the choices made by users, and subsequently created prototypes for users to interact with, in order to understand how consumers reacted to each of the different options. Our examination of commercial providers of LD uses Ordnance Survey (OS), the UK national mapping agency, as a case study, by studying their requirements for and experiences of publishing LD; we further extrapolate from this by comparing the OS to other potential commercial publishers of LD. Our research looks at the business case for LD and introduces the concepts of LOD and Linked Closed Data (LCD). We also determine that there are two types of LD user, non-commercial and commercial, and correspondingly two types of use of LD: LD as a raw commodity and LD as an application. Our experiments aim to identify the issues users would encounter when LD is accessed via an application. Our first investigation brought together technical users and users of Geographic Information (GI). With the idea of LOD and LCD in mind, we asked users what factors would affect their view of data quality. We found three different types of buying behaviour on the web. We also found that context actively affected the user's decision, i.e. users were willing to pay when the data was to be used to make a professional decision but not for leisure use. To enable us to observe the behaviour of consumers whilst using data online, we built a working prototype of an LD application that would enable potential users of the system to experience the data and give us feedback about how they would behave in an LD environment. This was then extended into a second LD application to find out whether the same principles held true when actual capital was involved and users had to make a conscious decision regarding payment. With this in mind we proposed a potential architecture for the consumption of LD on the web. We determined potential issues surrounding quality factors which affect a consumer's willingness to pay for data. This supported our hypothesis that context affects a consumer's willingness to pay and that willingness to pay is related to a requirement to reduce search times. We also found that a consumer's perception of value and the criticality of purpose affected their willingness to pay. Finally, we outlined an architecture to enable users to use LD in different scenarios, some of which may involve payment restrictions. This work is our contribution to the issue of the business case for LD on the web and is a starting point.
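The finding that context and price drive the choice between free and paid-for data can be expressed as a small decision sketch. The function, its parameters and the quality/price figures are illustrative assumptions, not the architecture proposed in the thesis.

```python
def choose_source(context: str, budget: float, lod_quality: float,
                  lcd_quality: float, lcd_price: float) -> str:
    """Prefer paid Linked Closed Data only when the task is professional,
    the price fits the budget, and the quality gain justifies paying."""
    if context == "professional" and lcd_price <= budget and lcd_quality > lod_quality:
        return "LCD (paid endpoint)"
    return "LOD (free endpoint)"

print(choose_source("professional", budget=50.0, lod_quality=0.6,
                    lcd_quality=0.9, lcd_price=30.0))   # LCD (paid endpoint)
print(choose_source("leisure", budget=50.0, lod_quality=0.6,
                    lcd_quality=0.9, lcd_price=30.0))   # LOD (free endpoint)
```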
49

Achieving accurate opinion consensus in large multi-agent systems

Pryymak, Oleksandr January 2013 (has links)
Modern communication technologies offer the means to share information within decentralised, large and complex networks of agents. A significant number of examples of peer-to-peer interaction can be found in domains such as sensor networks, social web communities and file-sharing networks. Nevertheless, the development of decentralised systems still presents new challenges for sharing uncertain and conflicting information in large communities of agents. In particular, the problem of forming an opinion consensus supported by most of the observations distributed in a large system is still challenging. To date, this problem has been approached from two perspectives: (i) on a system level, by analysing the complex processes of opinion sharing in order to determine which system parameters result in higher performance; and (ii) from the perspective of individual agents, by designing algorithms for interactively reaching agreements on the correct opinion or for reasoning about the accuracy of a received opinion from its additional annotation. However, both of these approaches have significant weaknesses. The first requires centralised control and perfect knowledge of the configuration of the system in order to simulate it, which are unlikely to be available for large decentralised systems. The latter algorithms introduce a significant communication overhead, whilst in many cases the capabilities of the agents are restricted and communication strictly limited. Therefore, there is a need to fill the gap between these two approaches by addressing the problem of improving the accuracy of consensus in a decentralised fashion with minimal communication expense. With this motivation, in this thesis we focus on the problem of improving the accuracy of consensus in large, complex networks of agents. We consider challenging settings in which communication is strictly limited to the sharing of opinions, which are subjective statements about the correct state of the subject of common interest. These opinions are dynamically introduced by a small number of sensing agents which have low accuracy, so that the correct opinion only slightly prevails in the readings. In order to form an accurate consensus, the agents have to aggregate opinions from a number of sensing agents with which, however, they are very rarely in direct connection. Against this background, we focus on improving the accuracy of consensus and develop a solution for decentralised opinion aggregation. We build our work on recent research which suggests that large networked systems exhibit a mode of collective behaviour in which accuracy is improved. We extend this research and offer a novel opinion sharing model, which is the first to quantify the impact of collective behaviour on the accuracy of consensus. By investigating the properties of our model, we show that within a narrow range of parameters the accuracy of consensus is significantly improved in comparison to the accuracy of a single sensing agent. However, we show that such critical parameters cannot be predicted, since they are highly dependent on the system configuration. To address this problem, we develop the Autonomous Adaptive Tuning (AAT) algorithm, which controls the parameters of each agent individually and gradually tunes the system into the critical mode of collective behaviour. AAT is the first decentralised algorithm which improves accuracy in settings where communication is strictly limited to opinion sharing. As a result of applying AAT, 80-90% of the agents in a large system form the correct opinion, in contrast to 60-75% for the state-of-the-art message-passing algorithm proposed for these settings, known as DACOR. Additionally, we test further requirements by evaluating teams of different sizes and network topologies, and thereby demonstrate that AAT is both scalable and adaptive. Finally, we show that AAT is highly robust, since it significantly improves the accuracy of consensus even when deployed in only 10% of the agents in a large heterogeneous system. However, AAT is designed for settings in which agents do not differentiate their opinion sources, whilst in many other opinion sharing scenarios agents can learn who their sources are. Therefore, we design the Individual Weights Tuning (IWT) algorithm, which can benefit from such additional information. IWT is the first behavioural algorithm that differentiates between the peers of an agent in solving the problem of improving the accuracy of consensus. Agents running IWT attribute higher weights to opinions from peers which deliver the most surprising opinions. Crucially, by incorporating information about the source of an opinion, IWT outperforms AAT for systems with dense communication networks. Considering that IWT has a higher computational cost than AAT, we conclude that IWT is more beneficial in dense networks, while AAT delivers a similar level of accuracy improvement in sparse networks at a lower computational cost.
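To give a flavour of threshold-style opinion sharing under weak sensors, here is a minimal toy simulation: a handful of noisy sensing agents seed opinions, and every other agent adopts an opinion once the net count heard from its neighbours reaches a threshold, the kind of per-agent parameter that AAT would tune autonomously. The network construction, threshold rule and all figures are illustrative assumptions, not the thesis's model or the AAT algorithm itself.

```python
import random

def simulate(n_agents=200, n_sensors=20, sensor_accuracy=0.55,
             neighbours=4, threshold=2, rounds=50, seed=3):
    """Each non-sensing agent adopts +1/-1 once the net count of opinions heard
    from its neighbours reaches `threshold`; +1 is the correct state."""
    rng = random.Random(seed)
    opinion = [0] * n_agents                 # 0 = undecided
    for i in range(n_sensors):               # noisy sensing agents seed opinions
        opinion[i] = 1 if rng.random() < sensor_accuracy else -1
    links = {i: rng.sample([j for j in range(n_agents) if j != i], neighbours)
             for i in range(n_agents)}
    for _ in range(rounds):                  # synchronous sharing rounds
        new = opinion[:]
        for i in range(n_sensors, n_agents):
            net = sum(opinion[j] for j in links[i])
            if abs(net) >= threshold:
                new[i] = 1 if net > 0 else -1
        opinion = new
    return sum(1 for o in opinion if o == 1) / n_agents

print(round(simulate(), 2))   # fraction of agents holding the correct opinion
```

Varying `threshold` (or the other parameters) moves the system in and out of the regime where the weakly prevailing correct opinion spreads, which is exactly the sensitivity that motivates autonomous per-agent tuning.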
50

Pre-emptive type checking in dynamically typed programs

Grech, Neville January 2013 (has links)
With the rise of languages such as JavaScript, dynamically typed languages have gained a strong foothold in the programming language landscape. These languages are very well suited to rapid prototyping and to use with agile programming methodologies. However, programmers would benefit from the ability to detect type errors in their code early, without imposing unnecessary restrictions on their programs. Here we describe a new type inference system that identifies potential type errors through a flow-sensitive static analysis. This analysis is invoked at a very late stage, after the compilation to bytecode and initialisation of the program. For every expression it computes the variable's present type (derived from the values it has most recently been assigned) and its future type (the type with which it is used in the remaining program execution). Using this information, our mechanism inserts type checks at strategic points in the original program. We prove that these checks, inserted as early as possible, pre-empt type errors earlier than existing type systems. We further show that these checks do not change the semantics of programs that do not raise type errors. Pre-emptive type checking can be added to existing languages without the need to modify the existing runtime environment. We show this with an implementation for the Python language and demonstrate its effectiveness on a number of benchmarks.
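The effect of such an inserted check can be illustrated at source level (the thesis itself operates after compilation to bytecode). In the sketch below, `expect_type` is a hypothetical helper, not the thesis's API; the point is that a check derived from a variable's future use fires before the expensive work that precedes the original failure site.

```python
def expect_type(value, expected, var_name):
    # Hypothetical inserted guard: fail as soon as the future use is known to break.
    if not isinstance(value, expected):
        raise TypeError(f"{var_name} will be used as {expected.__name__}, "
                        f"got {type(value).__name__}")
    return value

def do_expensive_query():
    return list(range(100))

def report(user_input):
    count = user_input                  # e.g. "7" read from a web form
    expect_type(count, int, "count")    # inserted check: `count` is later used as an int
    rows = do_expensive_query()         # without the check, we only fail after this runs
    return rows[:count]                 # original failure point: list slicing needs an int

try:
    report("7")
except TypeError as e:
    print(e)    # "count will be used as int, got str" -- raised before the query runs
```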
