  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
361

Using current uptime to improve failure detection in peer-to-peer networks

Price, Richard Michael January 2010 (has links)
Peer-to-Peer (P2P) networks share computer resources or services through the exchange of information between participating nodes. These nodes form a virtual network overlay by creating a number of connections with one another. Due to the transient nature of nodes within these systems, any connection formed should be monitored and maintained to ensure the routing table is kept up-to-date. Typically P2P networks predefine a fixed keep-alive period, a maximum interval in which connected nodes must exchange messages. If no other message has been sent within this interval then keep-alive messages are exchanged to ensure the corresponding node has not left the system. A fixed periodic interval can be viewed as a centralised, static and deterministic mechanism, maintaining overlays in a predictable, reliable and non-adaptive fashion. Several studies have shown that older peers are more likely to remain in the network longer than their short-lived counterparts. Therefore, using the distribution of peer session times and the current age of peers as key attributes, we propose three algorithms which allow connections to extend the interval between successive keep-alive messages based upon the likelihood that a corresponding node will remain in the system. By prioritising keep-alive messages to nodes that are more likely to fail, our algorithms reduce the expected delay between failures occurring and their subsequent detection. Using extensive empirical analysis, we examine the properties of these algorithms and compare them to the standard periodic approach in unstructured and structured network topologies, using trace-driven simulations based upon measured network data. Furthermore, we investigate the effect of nodes that misreport their age upon our adaptive algorithms and detail an efficient keep-alive algorithm that can adapt to the limitations of network address translation devices.
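The core idea, that longer-lived peers earn longer keep-alive intervals, can be sketched as follows. The shifted-Pareto session distribution, the parameter values, and the doubling search are illustrative assumptions, not the thesis's three algorithms:

```python
def survival_beyond(age, extra, alpha=1.5, beta=60.0):
    """Conditional probability that a peer already `age` seconds old
    stays for at least `extra` more seconds, under an assumed
    shifted-Pareto session distribution S(t) = (beta/(beta+t))**alpha."""
    s = lambda t: (beta / (beta + t)) ** alpha
    return s(age + extra) / s(age)

def keepalive_interval(age, base=30.0, target=0.9, max_interval=600.0):
    """Pick the longest keep-alive interval (capped at max_interval)
    whose conditional survival probability still exceeds `target`.
    Young peers get the base interval; old peers get stretched ones,
    so probing effort concentrates on the peers most likely to fail."""
    interval = base
    while interval * 2 <= max_interval and \
            survival_beyond(age, interval * 2) >= target:
        interval *= 2
    return interval
```

Under these parameters a brand-new peer is probed every 30 seconds, while a peer that has been online for an hour is probed far less often, which is exactly the reallocation of monitoring effort the abstract describes.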
362

Mitigating private key compromise

Yu, Jiangshan January 2016 (has links)
Cryptosystems rely on the assumption that the computer end-points can securely store and use cryptographic keys. Yet, this assumption is rather hard to justify in practice. New software vulnerabilities are discovered every day, and malware is pervasive on mobile devices and desktop PCs. This thesis investigates how to mitigate private key compromise in three different cases. The first case considers compromised signing keys of certificate authorities in public key infrastructure. To address this problem, we analyse and evaluate existing prominent certificate management systems, and propose a new system called "Distributed and Transparent Key Infrastructure", which is secure even if all service providers collude together. The second case considers key compromise in secure communication. We develop a simple approach that either guarantees the confidentiality of messages sent to a device even if the device was previously compromised, or allows the user to detect that confidentiality failed. We propose a multi-device messaging protocol that exploits our concept to allow users to detect unauthorised usage of their device keys. The third case considers key compromise in secret distribution. We develop a self-healing system, which provides a proactive security guarantee: an attacker can learn a secret only if s/he can compromise all servers simultaneously in a short period.
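The "compromise all servers simultaneously" guarantee of the third case can be illustrated with n-of-n XOR secret sharing plus proactive refresh. This is a deliberately simplified sketch, not the thesis's self-healing system:

```python
import secrets

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def split(secret, n):
    """n-of-n additive (XOR) sharing: every share is needed to recover,
    so any n-1 shares reveal nothing about the secret."""
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    last = secret
    for s in shares:
        last = xor_bytes(last, s)
    return shares + [last]

def refresh(shares):
    """Proactive refresh: re-randomise the shares without reconstructing
    the secret. Each mask is XORed into exactly two shares, so the masks
    cancel overall; an attacker must now compromise every server within
    a single refresh epoch."""
    n, length = len(shares), len(shares[0])
    masks = [secrets.token_bytes(length) for _ in range(n)]
    return [xor_bytes(xor_bytes(shares[i], masks[i]), masks[(i + 1) % n])
            for i in range(n)]

def recover(shares):
    out = shares[0]
    for s in shares[1:]:
        out = xor_bytes(out, s)
    return out
```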
363

Flexible robotic control via co-operation between an operator and an AI-based control system

Chiou, Emmanouil January 2017 (has links)
This thesis addresses the problem of variable autonomy in teleoperated mobile robots. Variable autonomy refers to the approach of incorporating several different levels of autonomous capabilities (Level(s) of Autonomy (LOA)), ranging from pure teleoperation (the human has complete control of the robot) to full autonomy (the robot has control of every capability), within a single robot. Most robots used for demanding and safety-critical tasks (e.g. search and rescue, hazardous environment inspection) are currently teleoperated in simple ways, but could soon start to benefit from variable autonomy. The use of variable autonomy would allow Artificial Intelligence (AI) control algorithms to autonomously take control of certain functions when the human operator is suffering from high workload, high cognitive load, anxiety, or other distractions and stresses. In contrast, some circumstances may still necessitate direct human control of the robot. More specifically, this thesis focuses on investigating the issues of dynamically changing LOA (i.e. during task execution) using either Human-Initiative (HI) or Mixed-Initiative (MI) control. MI refers to a peer-to-peer relationship between the robot and the operator in terms of the authority to initiate actions and LOA switches. HI refers to the human operator switching LOA based on their judgment, with the robot having no capacity to initiate LOA switches. A HI and a novel expert-guided MI controller are presented in this thesis. These controllers were evaluated using a multidisciplinary, systematic experimental framework that combines quantifiable and repeatable performance-degradation factors for both the robot and the operator. The thesis presents statistically validated evidence that variable autonomy, in the form of HI and MI, provides advantages compared to using teleoperation alone or autonomy alone, in various scenarios.
Lastly, analyses of the interactions between the operators and the variable autonomy systems are reported. These analyses highlight the importance of personality traits and preferences, trust in the system, and the understanding of the system by the human operator, in the context of HRI with the proposed controllers.
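The HI/MI distinction can be illustrated with a toy robot-initiated switching rule: the robot proposes an LOA change when observed task progress falls well below what the current mode should achieve. The threshold rule and the names below are hypothetical, not the expert-guided controller of the thesis:

```python
TELEOPERATION, AUTONOMY = "teleoperation", "autonomy"

def mi_switch(current_loa, progress_rate, expected_rate, hysteresis=0.2):
    """Illustrative Mixed-Initiative rule: if observed progress toward
    the goal drops more than `hysteresis` below the rate the current
    mode is expected to achieve, hand control to the other agent.
    In Human-Initiative control this function would not exist; only
    the operator could trigger a switch."""
    ratio = progress_rate / expected_rate
    if ratio < 1.0 - hysteresis:
        return AUTONOMY if current_loa == TELEOPERATION else TELEOPERATION
    return current_loa
```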
364

A continuous computational interpretation of type theories

Xu, Chuangjie January 2015 (has links)
This thesis provides a computational interpretation of type theory validating Brouwer’s uniform-continuity principle that all functions from the Cantor space to the natural numbers are uniformly continuous, so that type-theoretic proofs with the principle as an assumption have computational content. For this, we develop a variation of Johnstone’s topological topos, which consists of sheaves on a certain uniform-continuity site that is suitable for predicative, constructive reasoning. Our concrete sheaves can be described as sets equipped with a suitable continuity structure, which we call C-spaces, and their natural transformations can be regarded as continuous maps. The Kleene-Kreisel continuous functionals can be calculated within the category of C-spaces. Our C-spaces form a locally cartesian closed category with a natural numbers object, and hence give models of Gödel’s system T and of dependent type theory. Moreover, the category has a fan functional that continuously computes moduli of uniform continuity, which validates the uniform-continuity principle formulated as a skolemized formula in system T and as a type via the Curry-Howard interpretation in dependent type theory. We emphasize that the construction of C-spaces and the verification of the uniform-continuity principles have been formalized in intensional Martin-Löf type theory in Agda notation.
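The uniform-continuity principle referred to above can be stated as follows, writing $\alpha =_{n} \beta$ for agreement of two binary sequences on their first $n$ bits:

```latex
\forall f \colon 2^{\mathbb{N}} \to \mathbb{N}.\;
  \exists n \in \mathbb{N}.\;
  \forall \alpha, \beta \colon 2^{\mathbb{N}}.\;
    \alpha =_{n} \beta \to f(\alpha) = f(\beta)
```

Skolemizing the existential quantifier yields a functional taking $f$ to such an $n$, a modulus of uniform continuity, which is the role played by the fan functional in the abstract.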
365

Computing relatively large algebraic structures by automated theory exploration

Mahesar, Quratul-ain January 2014 (has links)
Automated reasoning technology provides means for inference in a formal context via a multitude of disparate reasoning techniques. Combining different techniques not only increases the effectiveness of single systems but also provides a more powerful approach to solving hard problems. Consequently, combined reasoning systems have been successfully employed to solve non-trivial mathematical problems in combinatorially rich domains that are intractable by traditional mathematical means. Nevertheless, the lack of domain-specific knowledge often limits the effectiveness of these systems. In this thesis we investigate how the combination of diverse reasoning techniques can be employed to pre-compute additional knowledge to enable mathematical discovery in finite and potentially infinite domains that is otherwise not feasible. In particular, we demonstrate how we can exploit bespoke symbolic computations and automated theorem proving to automatically compute and evolve the structural knowledge of small finite structures in the algebraic theory of quasigroups. This allows us to increase the solvability horizon of model generation systems to find solution models for large finite algebraic structures that were previously unattainable. We also present an approach to exploring infinite models using a mixture of automated tools and user interaction to iteratively inspect the structure of solutions and refine the search. A practical implementation combines a specialist term rewriting system with bespoke graph algorithms and visualization tools, and has been applied to solve the generalized version of Kuratowski's classical closure-complement problem from point-set topology that had remained open for several years.
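As background, the quasigroup condition on a finite structure is exactly the Latin-square property of its Cayley table. A minimal check (illustrative only, not the thesis's tooling) can be written as:

```python
def is_quasigroup(table):
    """A finite structure (Q, *) given by an n-by-n Cayley table over
    elements 0..n-1 is a quasigroup iff the table is a Latin square:
    every element occurs exactly once in each row and each column
    (i.e. all left and right divisions have unique solutions)."""
    n = len(table)
    elems = set(range(n))
    rows_ok = all(set(row) == elems for row in table)
    cols_ok = all({table[r][c] for r in range(n)} == elems
                  for c in range(n))
    return rows_ok and cols_ok
```

For example, the addition table of the integers modulo 3 passes this check, whereas a table with a repeated entry in a row fails it. Model generators search the space of such tables subject to additional axioms, which is why pre-computed structural knowledge pays off so quickly as n grows.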
366

Developing artificial life simulations of vegetation to support the virtual reconstruction of ancient landscapes

Ch'ng, Eugene January 2007 (has links)
Research in Virtual Heritage has gained popularity in recent years. Efforts by the community of Virtual Heritage researchers to reconstruct sites considered worthy of preservation span from the historical “built environment”, including the Pyramids at Giza and Virtual Reality Notre Dame, to natural heritage sites such as Australia’s Great Barrier Reef and the Virtual Everglades in Florida. Other important efforts to conserve artefacts and educate visitors include Virtual Stonehenge, Pompeii and the Caves of Lascaux. Entire villages, cities and even caves have been constructed as part of virtual conservation efforts. These digital reconstructions have, to date, contributed significant awareness and interest among the general public, providing educational benefits to schoolchildren and new research opportunities to archaeologists and conservationists, to mention but two groups of beneficiaries. Today, to paraphrase the work of Professor Robert J. Stone, Virtual Heritage strives to deliver to a global audience computer-based reconstructions of artefacts, sites and actors of historic, artistic, religious and cultural heritage in such a way as to provide a formative educational experience through the manipulations of time and space. It is realised that the user experience and educational value of a Virtual Heritage site is crucial – the process of virtual reconstruction is as important as its outcome. The total experience therefore hinges on the modelling accuracy, scientific credibility, and the interactive visualisation capability of a virtual site. However, many interactive media implementations in Virtual Heritage in the recent past have failed to make full use of the advanced interactive visualisation techniques available to researchers. In particular, an element that many end users might consider essential, namely the inclusion of “living”, responsive virtual agents, is noticeably lacking in almost all Virtual Heritage examples.
The addition of these ‘living’ entities and environments could give Virtual Heritage applications richer, more evolvable content and a higher level of interactivity. Artificial Life (alife), an emerging research area dealing with the study of synthetic systems that exhibit behaviours characteristic of natural living systems, offers great potential in overcoming this missing element in current Virtual Heritage applications. The present research investigates the feasibility of constructing models of vegetation, exploiting new developments in Artificial Life implemented within a controlled Virtual Environment for application in the field of Archaeology. The specific area of study is the recently discovered and recently named Shotton river valley off the eastern coast of the United Kingdom – a region that once flourished during the Mesolithic Era prior to the post-glacial flooding of the North Sea.
367

Self-aware and self-adaptive autoscaling for cloud based services

Chen, Tao January 2016 (has links)
Modern Internet services are increasingly leveraging cloud computing for flexible, elastic and on-demand provision. Typically, the Quality of Service (QoS) of cloud-based services can be tuned using different underlying cloud configurations and resources, e.g., number of threads, CPU and memory, which are shared, leased and priced as utilities. This benefit is fundamentally grounded by autoscaling: an automatic and elastic process that adapts cloud configurations on-demand according to time-varying workloads. This thesis proposes a holistic cloud autoscaling framework to effectively and seamlessly address existing challenges related to different logical aspects of autoscaling, including architecting the autoscaling system, modelling the QoS of cloud-based services, determining the granularity of control and making trade-off autoscaling decisions. The framework takes advantage of the principles of self-awareness and the related algorithms to adaptively handle the dynamics, uncertainties, QoS interference and trade-offs on objectives that are exhibited in the cloud. The major benefit is that, by leveraging the framework, cloud autoscaling can be effectively achieved without heavy human analysis and design-time knowledge. Through various experiments using the RUBiS benchmark and realistic workloads in a real cloud setting, this thesis evaluates the effectiveness of the framework based on various quality indicators and compares it with other state-of-the-art approaches.
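As a point of contrast with the self-aware approach, a conventional reactive autoscaler is little more than a threshold rule over observed utilisation; all parameters below are illustrative assumptions, not values from the thesis:

```python
import math

def autoscale(current_vms, cpu_utilisation, target=0.6,
              min_vms=1, max_vms=20):
    """Illustrative reactive autoscaling rule (far simpler than a
    self-aware framework): resize the VM pool so that predicted
    per-VM utilisation moves back toward the target level."""
    # round() guards against floating-point noise before taking the ceiling
    desired = math.ceil(round(current_vms * cpu_utilisation / target, 6))
    return max(min_vms, min(max_vms, desired))
```

Such rules need a human to pick the target and thresholds per workload, which is precisely the design-time effort the thesis's framework aims to remove.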
368

Digital traces of human mobility and interaction : models and applications

Lima, Antonio January 2016 (has links)
In the last decade digital devices and services have permeated many aspects of everyday life. They generate massive amounts of data that provide insightful information about how people move across geographic areas and how they interact with others. By analysing this detailed information, it is possible to investigate aspects of human mobility and interaction. Therefore, the thesis of this dissertation is that the analysis of mobility and interaction traces generated by digital devices and services, at different timescales and spatial granularity, can be used to gain a better understanding of human behaviour, build new applications and improve existing services. In order to substantiate this statement I develop analytical models and applications supported by three sources of mobility and interaction data: online social networks, mobile phone networks and GPS traces. First, I present three applications related to data gathered from online social networks, namely the analysis of a global rumour spreading in Twitter, the definition of spatial dissemination measures in a social graph and the analysis of collaboration between developers in GitHub. Then I describe two applications of the analysis of country-wide data of cellular phone networks: the modelling of epidemic containment strategies, with the goal of assessing their efficacy in curbing infectious diseases; the definition of a mobility-based measure of individual risk, which can be used to identify who needs targeted treatment. Finally, I present two applications based on GPS traces: the estimation of trajectories from spatially-coarse temporally-sparse location traces and the analysis of routing behaviour in urban settings.
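Mobility analyses of the kind described above typically rest on summary measures of individual movement; a common one is the radius of gyration, sketched here under a flat-earth approximation around the centroid (an illustrative measure, not code from the dissertation):

```python
import math

def radius_of_gyration(points):
    """Radius of gyration (in km) of a set of (lat, lon) visits: the
    root-mean-square distance of the visits from their centroid, a
    standard characterisation of how far an individual roams.
    Degrees are projected to kilometres locally, which is adequate
    for city-scale traces."""
    lat_c = sum(p[0] for p in points) / len(points)
    lon_c = sum(p[1] for p in points) / len(points)
    km_per_deg_lat = 111.32
    km_per_deg_lon = 111.32 * math.cos(math.radians(lat_c))
    sq = 0.0
    for lat, lon in points:
        dx = (lon - lon_c) * km_per_deg_lon
        dy = (lat - lat_c) * km_per_deg_lat
        sq += dx * dx + dy * dy
    return math.sqrt(sq / len(points))
```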
369

Economics-driven approach for self-securing assets in cloud

Tziakouris, Giannis January 2017 (has links)
This thesis proposes the engineering of an elastic self-adaptive security solution for the Cloud that considers assets as independent entities, with a need for customised, ad-hoc security. The solution exploits agent-based, market-inspired methodologies and learning approaches for managing the changing security requirements of assets by considering the shared and on-demand nature of services and resources while catering for monetary and computational constraints. The usage of auction procedures allows the proposed framework to deal with the scale of the problem and the trade-offs that can arise between users and Cloud service provider(s). Meanwhile, the use of a learning technique enables our framework to operate in a proactive, automated fashion and to arrive at more efficient bidding plans, informed by historical data. A variant of the proposed framework, grounded on a simulated university application environment, was developed to evaluate the applicability and effectiveness of this solution. As the proposed solution is grounded on market methods, this thesis is also concerned with asserting the dependability of market mechanisms. We follow an experimentally driven approach to demonstrate the deficiency of existing market-oriented solutions in facing common market-specific security threats and provide candidate, lightweight defensive mechanisms for securing them against these attacks.
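The auction procedures such frameworks rely on can be illustrated with a sealed-bid second-price (Vickrey) auction, in which the highest bidder wins but pays the second-highest bid, making truthful bidding a dominant strategy. This is a generic sketch, not the thesis's actual mechanism:

```python
def vickrey_auction(bids):
    """Sealed-bid second-price auction over a dict of bidder -> bid.
    Returns (winner, price): the highest bidder wins, paying the
    second-highest bid (or their own bid if they are the sole bidder)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price
```

Because a bidder's payment does not depend on their own bid, agents bidding for security resources have no incentive to misreport their valuations, which simplifies learning efficient bidding plans.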
370

Probabilistic roadmaps in uncertain environments

Kneebone, M. L. January 2010 (has links)
Planning under uncertainty is a common requirement of robot navigation. Probabilistic roadmaps are an efficient method for generating motion graphs through the robot's configuration space, but do not inherently represent any uncertainty in the environment. In this thesis, the physical domain is abstracted into a graph search problem where the states of some edges are unknown. This is modelled as a decision-theoretic planning problem described through a partially observable Markov Decision Process (POMDP). It is shown that the optimal policy can depend on accounting for the value of information from observations. The model scalability and the graph size that can be handled are then extended by conversion to a belief state Markov Decision Process. Approximations to both the model and the planning algorithm are demonstrated that further extend the scalability of the techniques for static graphs. Experiments conducted verify the viability of these approximations by producing near-optimal plans in greatly reduced time compared to recent POMDP solvers. Belief state approximation in the planner reduces planning time significantly while producing plans of equal quality to those without this approximation. This is shown to be superior to other techniques such as heuristic weighting, which is not found to give any significant benefit to the planner.
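The value of information from observations can be made concrete with a one-edge example: a robot chooses between a short route over an edge of unknown state and a safe detour, and may pay a sensing cost to observe the edge first. The cost model below is an illustrative calculation, not the thesis's POMDP formulation:

```python
def value_of_information(p_open, c_direct, c_detour, c_fail, c_sense):
    """Expected saving from sensing an uncertain roadmap edge before
    committing to a plan.
    p_open:   belief that the short edge is traversable
    c_direct: cost of the short route when the edge is open
    c_detour: cost of the guaranteed detour
    c_fail:   cost of attempting the direct route and finding the edge
              blocked (travel there, backtrack, then detour)
    c_sense:  cost of observing the edge first
    Returns (expected blind cost) - (expected informed cost); a positive
    value means the optimal policy pays for the observation."""
    blind = min(p_open * c_direct + (1 - p_open) * c_fail, c_detour)
    informed = c_sense + p_open * c_direct + (1 - p_open) * c_detour
    return blind - informed
```

For instance, with an even belief, a cheap sensor and an expensive failed attempt, sensing wins; once the edge is known open with certainty, sensing only wastes its own cost, which is why the optimal POMDP policy observes selectively.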
