121.
What makes an interruption disruptive? : understanding the effects of interruption relevance and timing on performance. Gould, A. J. J. January 2014
Interruptions disrupt activity, hindering performance and provoking errors. They present an obvious challenge in safety-critical environments where momentary slips can have fatal consequences. Interruptions are also a problem in more workaday settings, like offices, where they can reduce productivity and increase stress levels. To be able to systematically manage the negative effects of interruptions, we first need to understand the factors that influence their disruptiveness. This thesis explores how the disruptiveness of interruptions is influenced by their relevance and timing. Seven experimental studies investigate these properties in the context of a routine data-entry task. The first three experiments explore how relevance and timing interact. They demonstrate that the relevance of interruptions depends on the contents of working memory at the moment of interruption. Next, a pair of experiments distinguish the oft-conflated concepts of interruption relevance and relatedness. They show that interruptions with similar content to the task at hand can negatively affect performance if they do not contribute toward the rehearsal of goals in working memory. By causing active interference, seemingly useful interruptions that are related to the task at hand have the potential to be more disruptive than entirely unrelated, irrelevant interruptions. The final two experiments in this thesis test the reliability of the effects observed in the first five experiments through alternative experimental paradigms. They show that relevance and timing effects are consistent even when participants are given control over interruptions and that these effects are robust even in an online setting where experimental control is compromised. The work presented in this thesis enhances our understanding of the factors influencing the disruptiveness of interruptions. Its primary contribution is to show that when we talk about interruptions, ‘relevance’, ‘irrelevance’ and ‘relatedness’ must be considered in the context of the contents of working memory at the moment of interruption. This finding has implications for experimental investigations of interrupted performance, efforts to understand the effects of interruptions in the workplace, and the development of systems that help users manage interruptions.
122.
Computational analytics for venture finance. Stone, T. R. January 2014
This thesis investigates the application of computational analytics to the domain of venture finance – the deployment of capital to high-risk ventures in pursuit of maximising financial return. Traditional venture finance is laborious and highly inefficient. Whilst high street banks approve (or reject) personal loans in a matter of minutes, it takes an early-stage venture capital (VC) firm months to put a term sheet in front of a fledgling new venture. Whilst these are fundamentally different forms of finance (longer return period, larger investments, different risk profiles), a more data-informed and analytical approach to venture finance is foreseeable. We have surveyed existing software tools in relation to the venture capital investment process and stage of investment. We find that analytical tools are nascent and use of analytics in industry is limited. To date, only a small handful of venture capital firms have publicly declared their use of computational analytical methods in their decision-making and investment selection process. This research has been undertaken with several industry partners including venture capital firms, seed accelerators, universities and other related organisations. Within our research we have developed a prototype software tool, NVANA (New Venture Analytics), for assessing new ventures and screening prospective deal flow. Over £20,000 in early-stage funding was distributed, with hundreds of new ventures assessed using the system. Both the limitations of our prototype and possible extensions are discussed. We have focused on computational analytics in the context of three sub-components of the NVANA system. Firstly, we improve the classification of private companies using supervised and multi-label classification techniques to develop a novel form of industry classification. Secondly, we investigate the potential to benchmark private company performance based upon a company's "digital footprint". Finally, we make a novel application of collaborative filtering and content-based recommendation techniques to the domain of venture finance. We conclude by discussing the future potential for computational analytics to increase efficiency and performance within the venture finance domain. We believe there is clear scope for assisting the venture capital investment process. However, we have identified limitations and challenges in terms of access to data, stage of investment and adoption by industry.
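As a rough illustration of the content-based recommendation idea the abstract mentions, the sketch below ranks prospective ventures by cosine similarity between their "digital footprint" feature vectors and an investor profile. The feature names, data and ranking policy are hypothetical; this is not the NVANA implementation, only a minimal example of the technique.

```python
# Minimal sketch of content-based matching between an investor profile and
# venture "digital footprint" feature vectors. Feature names and values are
# hypothetical; the thesis's NVANA system is not reproduced here.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

# Rows: ventures; columns: illustrative footprint features
# (e.g. web traffic, team size, code activity, press mentions), all scaled to [0, 1].
ventures = {
    "venture_a": np.array([0.8, 0.2, 0.9, 0.1]),
    "venture_b": np.array([0.1, 0.9, 0.2, 0.7]),
    "venture_c": np.array([0.7, 0.3, 0.8, 0.2]),
}

# An investor profile, e.g. averaged from ventures they previously backed.
investor_profile = np.array([0.75, 0.25, 0.85, 0.15])

# Rank prospective deal flow by similarity to the investor profile.
ranked = sorted(ventures.items(),
                key=lambda kv: cosine_similarity(investor_profile, kv[1]),
                reverse=True)
for name, features in ranked:
    print(name, round(cosine_similarity(investor_profile, features), 3))
```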
123.
A network model of financial markets. Rice, O. January 2015
This thesis introduces a network representation of equity markets. The model is based on the premise that assets share dependencies on abstract ‘factors’, resulting in exploitable patterns among asset price levels. The network model is a collection of long-run market trends estimated by a three-layer machine learning framework. The network model’s validity is comprehensively established with two simulations, in the fields of algorithmic trading and systemic risk. The algorithmic trading validation applies expectations derived from the network model to the estimation of expected future returns. It further utilizes the network’s expectations to actively manage a theoretically market-neutral portfolio. The validation demonstrates that the network model’s portfolio generates excess returns relative to two benchmarks. Over the period April 2007 to January 2014, the network model’s portfolio for assets drawn from the S&P/ASX 100 produced a Sharpe ratio of 0.674, approximately double that of the nearest benchmark. The systemic risk validation utilized the network model to simulate shocks to selected market sectors and evaluate the resulting financial contagion. The validation successfully differentiated sectors by systemic connectivity levels and suggested some interesting market features. Most notable was the identification of the ‘Financials’ sector as the most systemically influential and ‘Basic Materials’ as the most systemically dependent. Additionally, there was evidence that ‘Financials’ may function as a hub of systemic risk which exacerbates losses from multiple market sectors.
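For readers unfamiliar with the headline metric, the sketch below shows how an annualised Sharpe ratio is typically computed from daily portfolio returns. The return series, zero risk-free rate and 252 trading days per year are assumptions for illustration; the thesis's portfolio construction is not reproduced.

```python
# Sketch of an annualised Sharpe ratio computed from daily portfolio returns.
# The synthetic returns, zero daily risk-free rate and 252 trading days/year
# are illustrative assumptions only.
import numpy as np

def sharpe_ratio(daily_returns: np.ndarray,
                 risk_free_daily: float = 0.0,
                 periods_per_year: int = 252) -> float:
    """Annualised Sharpe ratio: mean excess return over its standard deviation."""
    excess = daily_returns - risk_free_daily
    return float(np.sqrt(periods_per_year) * excess.mean() / excess.std(ddof=1))

# Hypothetical daily returns of a market-neutral portfolio over ~5 years.
rng = np.random.default_rng(0)
returns = rng.normal(loc=0.0003, scale=0.007, size=252 * 5)
print(f"Sharpe ratio: {sharpe_ratio(returns):.3f}")
```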
124.
Optimizations in algebraic and differential cryptanalysis. Mourouzis, T. January 2015
In this thesis, we study how to enhance current cryptanalytic techniques, especially in Differential Cryptanalysis (DC) and to some degree in Algebraic Cryptanalysis (AC), by considering and solving some underlying optimization problems based on the general structure of the algorithm. In the first part, we study techniques for optimizing arbitrary algebraic computations in the general non-commutative setting with respect to several metrics [42, 44]. We apply our techniques to combinatorial circuit optimization and Matrix Multiplication (MM) problems [30, 44]. Obtaining exact bounds for such problems is very challenging. We have developed a 2-step technique, where firstly we algebraically encode the problem and then we solve the corresponding CNF-SAT problem using a SAT solver. We apply this methodology to optimize small circuits such as S-boxes with respect to a given metric and to discover new bilinear algorithms for multiplying sufficiently small matrices. We have obtained the best bit-slice implementation of the PRESENT S-box currently known [6]. Furthermore, this technique allows us to compute the Multiplicative Complexity (MC) of whole ciphers [23], a very important measure of the non-linearity of a cipher [20, 44]. Another major theme in this thesis is the study of advanced differential attacks on block ciphers. We suggest a general framework which enhances current differential cryptanalytic techniques, and we apply it to evaluate the security of the GOST block cipher [63, 102, 107]. We introduce a new type of differential sets based on the connections between the S-boxes, named “general open sets” [50, 51], which can be seen as a refinement of Knudsen’s truncated differentials [84]. Using this notion, we construct 20-round statistical distinguishers and then, based on this construction, we develop attacks against the full 32 rounds. Our attacks take the form of a depth-first key search with many technical steps subject to optimization. We validate and analyze in detail each of these steps in an attempt to provide a solid formulation for our advanced differential attacks.
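The 2-step "encode, then solve with a SAT solver" loop can be illustrated on a toy problem: ask, for decreasing bounds k, whether a constraint system remains satisfiable with at most k active gates, and report the smallest feasible k. The clauses below are hypothetical and far simpler than the thesis's S-box and matrix-multiplication encodings, and the PySAT library is assumed here merely as a convenient stand-in solver.

```python
# Toy illustration of encode-then-SAT-solve minimisation: variable i means
# "gate i is used"; FUNCTIONAL holds hypothetical feasibility constraints.
# Requires the PySAT package (pip install python-sat), used as a stand-in solver.
from itertools import combinations
from pysat.solvers import Glucose3

GATES = [1, 2, 3, 4, 5]
# Hypothetical constraints: need gate 1 or 2, gate 3 or 4, and gate 5 if gate 1 is used.
FUNCTIONAL = [[1, 2], [3, 4], [-1, 5]]

def at_most(lits, k):
    """Naive at-most-k encoding: forbid every (k+1)-subset being simultaneously true."""
    return [[-l for l in subset] for subset in combinations(lits, k + 1)]

def min_gates():
    for k in range(len(GATES), -1, -1):
        clauses = FUNCTIONAL + at_most(GATES, k)
        with Glucose3(bootstrap_with=clauses) as solver:
            if not solver.solve():
                return k + 1          # k gates is infeasible, so k + 1 was minimal
    return 0

print("minimum number of gates:", min_gates())
```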
125.
Stronger secrecy for network-facing applications through privilege reduction. Marchenko, P. January 2014
Despite significant effort in improving software quality, vulnerabilities and bugs persist in applications. Attackers remotely exploit vulnerabilities in network-facing applications and then disclose and corrupt users' sensitive information that these applications process. Reducing the privilege of application components helps to limit the harm that an attacker may cause if she exploits an application. Privilege reduction, i.e., the Principle of Least Privilege, is a fundamental technique that allows one to contain possible exploits of error-prone software components: it entails granting a software component the minimal privilege that it needs to operate. Applying this principle ensures that sensitive data is given only to those software components that genuinely need to process such data. This thesis explores how to reduce the privilege of network-facing applications to provide stronger confidentiality and integrity guarantees for sensitive data. First, we look into applying privilege reduction to cryptographic protocol implementations. We address the vital and largely unexamined problem of how to structure implementations of cryptographic protocols to protect sensitive data even in the case when an attacker compromises untrusted components of a protocol implementation. As evidence that the problem is poorly understood, we identified two attacks which succeed in disclosing sensitive data in two state-of-the-art, exploit-resistant cryptographic protocol implementations: the privilege-separated OpenSSH server and the HiStar/DStar DIFC-based SSL web server. We propose practical, general, system-independent principles for structuring protocol implementations to defend against these two attacks. We apply our principles to protect sensitive data from disclosure in the implementations of both the server and client sides of OpenSSH and of the OpenSSL library. Next, we explore how to reduce the privilege of language runtimes, e.g., the JavaScript language runtime, so as to minimize the risk of their compromise, and thus of the disclosure and corruption of sensitive information. Modern language runtimes are complex software involving such advanced techniques as just-in-time compilation, native-code support routines, garbage collection, and dynamic runtime optimizations. This complexity makes it hard to guarantee the safety of language runtimes, as evidenced by the frequency of the discovery of vulnerabilities in them. We provide new mechanisms that allow sandboxing language runtimes using Software-based Fault Isolation (SFI). In particular, we enable sandboxing of runtime code modification, which modern language runtimes depend on heavily for achieving high performance. We have applied our sandboxing techniques to the V8 JavaScript engine on both the x86-32 and x86-64 architectures, and found that the techniques incur only moderate performance overhead. Finally, we apply privilege reduction within the web browser to secure sensitive data within web applications. Web browsers have become an attractive target for attackers because of their widespread use. There are two principal threats to a user's sensitive data in the browser environment: untrusted third-party extensions and untrusted web pages. Extensions execute with elevated privilege, which allows them to read content within all web applications. Thus, a malicious extension author may write extension code that reads sensitive page content and sends it to a remote server he controls.
Alternatively, a malicious page author may exploit an honest but buggy extension, thus leveraging its elevated privilege to disclose sensitive information from other origins. We propose enforcing privilege reduction policies on extension JavaScript code to protect web applications' sensitive data from malicious extensions and malicious pages. We designed ScriptPolice, a policy system for the Chrome browser's V8 JavaScript language runtime, to enforce flexible security policies on JavaScript execution. We restrict the privileges of a variety of extensions and contain any malicious activity, whether introduced by design or injected by a malicious page. The overhead ScriptPolice incurs on extension execution is acceptable: the added page load latency caused by ScriptPolice is so short as to be virtually imperceptible to users.
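To make the underlying Principle of Least Privilege concrete, here is a generic POSIX privilege-separation sketch: a privileged parent forks a worker that permanently drops to an unprivileged user before touching untrusted input, so an exploit of the parser gains few privileges. The UID/GID values and parse_request() are hypothetical, and this is not the thesis's OpenSSH/OpenSSL restructuring or its SFI sandboxing, just an illustration of the general pattern.

```python
# Generic privilege-separation sketch (POSIX only; must be started as root
# for setuid/setgid to succeed). Not the thesis's mechanisms.
import os

UNPRIVILEGED_UID = 65534   # e.g. "nobody" on many systems (assumption)
UNPRIVILEGED_GID = 65534

def parse_request(data: bytes) -> str:
    """Stand-in for error-prone parsing of untrusted network input."""
    return data.decode("utf-8", errors="replace").strip()

def run_worker(untrusted: bytes) -> None:
    # Drop group first, then user; after this the process cannot regain root.
    os.setgid(UNPRIVILEGED_GID)
    os.setuid(UNPRIVILEGED_UID)
    print("worker parsed:", parse_request(untrusted))

def main() -> None:
    untrusted = b"GET /index.html\r\n"
    pid = os.fork()
    if pid == 0:                     # child: unprivileged worker does the risky work
        try:
            run_worker(untrusted)
        finally:
            os._exit(0)
    os.waitpid(pid, 0)               # parent keeps its privileges and never parses input
    print("parent: worker finished")

if __name__ == "__main__":
    main()
```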
126.
Feature selection in computational biology. Athanasakis, D. January 2014
This thesis concerns feature selection, with a particular emphasis on the computational biology domain and the possibility of non-linear interaction between features. Towards this it establishes a two-step approach, where the first step is feature selection, followed by the learning of a kernel machine in this reduced representation. Optimization of kernel target alignment is proposed as a model selection criterion and its properties are established for a number of feature selection algorithms, including some novel variants of stability selection. The thesis further studies greedy and stochastic approaches for optimizing alignment, proposing a fast stochastic method with substantial probabilistic guarantees. The proposed stochastic method compares favorably to its deterministic counterparts in terms of computational complexity and resulting accuracy. The characteristics of this stochastic proposal in terms of computational complexity and applicability to multi-class problems make it invaluable to a deep learning architecture which we propose. Very encouraging results of this architecture on a recent challenge dataset further justify this approach, as do good results on a signal peptide cleavage prediction task. These proposals are evaluated in terms of generalization accuracy, interpretability and numerical stability of the models, and speed on a number of real datasets arising from infectious disease bioinformatics, with encouraging results.
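Kernel target alignment, the model selection criterion named above, is the Frobenius-inner-product cosine between a kernel matrix K and the ideal target kernel yy^T. The sketch below computes it on synthetic data to show why informative features score higher than noise; the data and linear kernel are illustrative assumptions, not the thesis's feature-selection wrappers.

```python
# Minimal sketch of kernel-target alignment:
# A(K, yy^T) = <K, yy^T>_F / (||K||_F * ||yy^T||_F).
import numpy as np

def kernel_target_alignment(K: np.ndarray, y: np.ndarray) -> float:
    """Alignment between kernel matrix K and the label target kernel yy^T."""
    Y = np.outer(y, y)
    return float(np.sum(K * Y) / (np.linalg.norm(K) * np.linalg.norm(Y)))

# Synthetic example: a linear kernel on two label-correlated features vs. on noise.
rng = np.random.default_rng(0)
y = np.sign(rng.standard_normal(100))
X_good = np.column_stack([y + 0.3 * rng.standard_normal(100),
                          y + 0.3 * rng.standard_normal(100)])
X_noise = rng.standard_normal((100, 2))

print("alignment, informative features:", kernel_target_alignment(X_good @ X_good.T, y))
print("alignment, noise features:", kernel_target_alignment(X_noise @ X_noise.T, y))
```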
127.
Putting the user at the centre of the grid : simplifying usability and resource selection for high performance computing. Zasada, S. J. January 2015
Computer simulation is finding a role in an increasing number of scientific disciplines, concomitant with the rise in available computing power. Realizing this inevitably requires access to computational power beyond the desktop, making use of clusters, supercomputers, data repositories, networks and distributed aggregations of these resources. Accessing one such resource entails a number of usability and security problems; when multiple geographically distributed resources are involved, the difficulty is compounded. However, usability is an all too often neglected aspect of computing on e-infrastructures, although it is one of the principal factors militating against the widespread uptake of distributed computing. The usability problems are twofold: the user needs to know how to execute the applications they need to use on a particular resource, and also to gain access to suitable resources to run their workloads as they need them. In this thesis we present our solutions to these two problems. Firstly we propose a new model of e-infrastructure resource interaction, which we call the user–application interaction model, designed to simplify executing applications on high performance computing resources. We describe the implementation of this model in the Application Hosting Environment, which provides a Software as a Service layer on top of distributed e-infrastructure resources. We compare the usability of our system with commonly deployed middleware tools using five usability metrics. Our middleware and security solutions are judged to be more usable than other commonly deployed middleware tools. We go on to describe the requirements for a resource trading platform that allows users to purchase access to resources within a distributed e-infrastructure. We present the implementation of this Resource Allocation Market Place as a distributed multi-agent system, and show how it provides a highly flexible, efficient tool to schedule workflows across high performance computing resources.
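As a toy illustration of the kind of matching a resource trading platform might perform, the sketch below pairs a job request with the cheapest resource offer that meets its core-count and wall-time requirements. The offer and request fields, the prices and the greedy policy are all assumptions for illustration; the thesis's multi-agent Resource Allocation Market Place protocol is not reproduced.

```python
# Toy matching of a job request to resource offers by a simple greedy policy.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Offer:
    provider: str
    cores: int
    max_hours: int
    price_per_core_hour: float
    available: bool = True

@dataclass
class Request:
    user: str
    cores: int
    hours: int

def match(request: Request, offers: List[Offer]) -> Optional[Offer]:
    """Return the cheapest available offer satisfying the request, if any."""
    candidates = [o for o in offers
                  if o.available and o.cores >= request.cores and o.max_hours >= request.hours]
    if not candidates:
        return None
    best = min(candidates, key=lambda o: o.price_per_core_hour * request.cores * request.hours)
    best.available = False           # reserve the matched resource
    return best

offers = [Offer("cluster_a", 128, 48, 0.05),
          Offer("hpc_b", 1024, 24, 0.08),
          Offer("cloud_c", 64, 72, 0.12)]
print(match(Request("alice", cores=64, hours=12), offers))
```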
128.
New tractography methods based on parametric models of white matter fibre dispersion. Rowe, M. C. January 2015
Diffusion weighted magnetic resonance imaging (DW-MRI) is a powerful imaging technique that can probe the complex structure of the body, revealing structural trends which exist at scales far below the voxel resolution. Tractography utilises the information derived from DW-MRI to examine the structure of white matter. Using information derived from DW-MRI, tractography can estimate connectivity between distinct, functional cortical and sub-cortical regions of grey matter. Understanding how separate functional regions of the brain are connected as part of a network is key to understanding how the brain works. Tractography has been used to delineate many known white matter structures and has also revealed structures not fully understood from anatomy due to limitations of histological examination. However, there still remain many shortcomings of tractography, and many anatomical features for which tractography algorithms are known to fail, leading to discrepancies between known anatomy and tractography results. With the aim of approaching a complete picture of the human connectome via tractography, we seek to address the shortcomings in current tractography techniques by exploiting new advances in modelling techniques used in DW-MRI, which provide more accurate representation of underlying white matter anatomy. This thesis introduces a methodology for fully utilising new tissue models in DW-MRI to improve tractography. It is known from histology that there are regions of white matter where fibres disperse or curve rapidly at length scales below the DW-MRI voxel resolution. One area where dispersion is particularly prominent is the corona radiata. New DW-MRI models capture dispersion utilising specialised parametric probability distributions. We present novel tractography algorithms utilising these parametric models of dispersion in tractography to improve connectivity estimation in areas of dispersing fibres. We first present an algorithm utilising the new parametric models of dispersion for tractography in a simple Bayesian framework. We then present an extension to this algorithm which introduces a framework to pool neighbourhood information from multiple voxels in the neighbourhood surrounding the tract in order to better estimate connectivity, introducing the new concept of the neighbourhood-informed orientation distribution function (NI-ODF). Specifically, using neighbourhood exploration we address the ambiguity arising in ‘fanning polarity’. In regions of dispersing fibres, the antipodal symmetry inherent in DW-MRI makes it impossible to resolve the polarity of a dispersing fibre configuration from a local voxel-wise model in isolation; by pooling information from neighbouring voxels, we show that this issue can be addressed. We evaluate the newly proposed tractography methods using synthetic phantoms simulating canonical fibre configurations and validate the ability to effectively navigate regions of dispersing fibres and resolve fanning polarity. We then validate that the algorithms perform effectively in real in vivo data, using DW-MRI data from five healthy subjects. We show that by utilising models of dispersion, we recover a wider range of connectivity compared to other standard algorithms when tracking through an area of the brain known to have significant white matter fibre dispersion: the corona radiata. We then examine the impact of the new algorithm on global connectivity estimates in the brain.
We find that whole brain connectivity networks derived using the new tractography method feature strong connectivity between frontal lobe regions. This is in contrast to networks derived using competing tractography methods which do not account for sub-voxel fibre dispersion. We also compare thalamo-cortical connectivity estimated using the newly proposed tractography method with that estimated by a competing tractography method, finding that the recovered connectivity profiles are largely similar, with some differences in thalamo-cortical connections to regions of the frontal lobe. The results suggest that fibre dispersion is an important structural feature to model within a tractography algorithm, as it has a strong effect on connectivity estimation.
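To give a flavour of probabilistic streamline tractography under fibre dispersion, the schematic sketch below draws each step direction around the local mean orientation and flips it into the hemisphere of the previous step, which is one simple way to fix the antipodal ("fanning polarity") ambiguity. Dispersion is approximated here by isotropic Gaussian perturbation and renormalisation rather than the parametric Watson/Bingham-style models of the thesis, and the orientation field is synthetic; this is an illustration of the general idea, not the NI-ODF algorithm.

```python
# Schematic probabilistic streamline propagation with a crude dispersion model.
import numpy as np

rng = np.random.default_rng(1)

def mean_orientation(pos: np.ndarray) -> np.ndarray:
    """Synthetic orientation field: fibres gently curving in the x-y plane."""
    d = np.array([1.0, 0.2 * np.sin(0.1 * pos[0]), 0.0])
    return d / np.linalg.norm(d)

def sample_direction(mu: np.ndarray, prev: np.ndarray, kappa: float = 30.0) -> np.ndarray:
    """Draw a unit vector dispersed about mu; flip it to match the previous step."""
    d = mu + rng.normal(scale=1.0 / np.sqrt(kappa), size=3)
    d /= np.linalg.norm(d)
    return d if np.dot(d, prev) >= 0 else -d   # resolve antipodal symmetry

def track(seed: np.ndarray, steps: int = 200, step_size: float = 0.5) -> np.ndarray:
    pos, prev = seed.astype(float), mean_orientation(seed)
    path = [pos.copy()]
    for _ in range(steps):
        d = sample_direction(mean_orientation(pos), prev)
        if np.dot(d, prev) < np.cos(np.radians(60)):   # curvature stopping rule
            break
        pos, prev = pos + step_size * d, d
        path.append(pos.copy())
    return np.array(path)

streamline = track(np.zeros(3))
print(f"{len(streamline)} points, endpoint {streamline[-1].round(2)}")
```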
129.
Designing and evaluating a contextual mobile learning application to support situated learning. Alnuaim, A. January 2015
This research emerged from seeking to identify ways of getting Human-Computer Interaction Design students into real-world environments, similar to those in which they will eventually be designing, thus maximising their ability to identify opportunities for innovation. In helping students learn how to become proficient and innovative designers and developers, it is crucial that their ‘out of the classroom’ experience of the environments in which their designs will be used augments and extends in-class learning. The aim of this research is to investigate, firstly, a blended learning model for students in higher education using mobile technology for situated learning and, secondly, the process of designing a mobile learning app within this blended learning model. This app was designed, by the author, to support students in a design task and to develop their independent learning and critical thinking skills, as part of their Human-Computer Interaction coursework. The first stage in designing the system was to conduct a comprehensive contextual inquiry to understand specific student and staff needs in the envisaged scenario. In addition, this research explores the challenges in implementing and deploying such an app in the learning context. A number of evaluations were conducted to assess the design, usability and effectiveness of the app, which we have called sLearn. The results show an improvement in scores and in the quality of assessed work completed with the support of the sLearn app, and a positive response from students regarding its usability and pedagogic utility. These promising results show that the app has helped students develop critical thinking and independent learning skills. The research also considers the challenges of conducting an ecologically valid study of such interventions in a higher education setting. Issues were discovered in regard to the context of use, such as the usability of interface elements and students feeling self-conscious when using the app in a public place. The model was tested with two other student cohorts, User Experience and Engineering students, to further investigate best practice in deploying mobile learning in higher education and to examine the suitability of this learning model for different disciplines. These trials suggest that the model is indeed suitable, and the engineering study in particular demonstrated that it has the potential to support the in-situ learning of students from non-computing disciplines.
130.
Uncertainty and uncertainty tolerance in service provisioning. Abdullah, Johari January 2014
Service, in general terms, is a type of economic activity where consumers utilize the labour and/or expertise of others to perform a specific task. The birth and continued growth of the Internet provide a new medium for services to be delivered, and enable services to become widely and readily available. In recent years, the Internet has become an important platform for providing services to end users. Service provisioning, in the context of computing, is the process of providing users with access to data and technology resources. In a perfect operating environment, the entities involved can expect the system to perform as intended or up to an accepted level of quality. Unfortunately, disruptions or failures can occur which can affect the operation of the service. Thus, the entities involved, in particular the service requester, face a situation whereby the service requester's belief about certain processes in the service provisioning life cycle is affected, i.e. deviates from the actual truth. This situation whereby the service requester's belief is affected is referred to as uncertainty. In this thesis, we discuss and explore the issue of uncertainty throughout the service provisioning life cycle and provide a measure to tolerate uncertainty in service provisioning offers through the application of a subjective probability framework. This thesis provides several key contributions to address the uncertainty issues in service provisioning systems, in particular for a service requester to overcome the negative consequences of uncertainty. The key contributions are: (1) an introduction to the issue of uncertainty in service provisioning systems, (2) a new classification scheme for uncertainties in service provisioning systems, (3) a unified view of uncertainty in service provisioning systems based on temporal classification, which is linked to the service requester's view, (4) a concept of uncertainty tolerance for service provisioning, and (5) an approach and framework for automated uncertainty tolerance in service provisioning offers. The approach and framework for uncertainty tolerance in service provisioning offers presented in this thesis are evaluated through an empirical study. The results from the study show the viability of the approach and framework for the uncertainty tolerance mechanism through the application of subjective probability theory. The results also show the positive outcome of the mechanism in terms of higher cumulative utility and a better acceptance rate for the service requester.
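One common subjective probability formulation is a subjective-logic style opinion (belief b, disbelief d, uncertainty u, base rate a, with b + d + u = 1) whose projected probability is E = b + a·u. The sketch below uses such an opinion to decide whether a service offer is tolerable; the opinion structure, threshold and numbers are assumptions for illustration, not the thesis's full framework.

```python
# Minimal sketch of an uncertainty-tolerance check on a service offer using a
# subjective-logic style opinion. Threshold and values are illustrative only.
from dataclasses import dataclass

@dataclass
class Opinion:
    belief: float       # evidence that the offer will be honoured
    disbelief: float    # evidence that it will not
    uncertainty: float  # mass not committed either way
    base_rate: float    # prior probability in the absence of evidence

    def expected_probability(self) -> float:
        """Projected probability E = b + a * u (requires b + d + u = 1)."""
        assert abs(self.belief + self.disbelief + self.uncertainty - 1.0) < 1e-9
        return self.belief + self.base_rate * self.uncertainty

def tolerate(offer: Opinion, threshold: float = 0.7) -> bool:
    """Accept the offer if its projected probability clears the tolerance threshold."""
    return offer.expected_probability() >= threshold

# A provider with moderate positive evidence but noticeable uncertainty.
offer = Opinion(belief=0.6, disbelief=0.1, uncertainty=0.3, base_rate=0.5)
print(offer.expected_probability(), tolerate(offer))
```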