11

Engineer-computer interaction for structural monitoring

Stalker, R. January 2000 (has links)
No description available.
12

A framework for continuous, transparent authentication on mobile devices

Crawford, Heather Anne January 2012 (has links)
Mobile devices have consistently advanced in terms of processing power, amount of memory and functionality. With these advances, the ability to store potentially private or sensitive information on them has increased. Traditional methods for securing mobile devices, passwords and PINs, are inadequate given their weaknesses and the bursty use patterns that characterize mobile devices. Passwords and PINs are often shared or weak secrets to ameliorate the memory load on device owners. Furthermore, they represent point-of-entry security, which provides access control but not authentication. Alternatives to these traditional methods have been suggested. Examples include graphical passwords, biometrics and sketched passwords, among others. These alternatives all have their place in an authentication toolbox, as do passwords and PINs, but do not respect the unique needs of the mobile device environment. This dissertation presents a continuous, transparent authentication method for mobile devices called the Transparent Authentication Framework. The Framework uses behavioral biometrics, which are patterns in how people perform actions, to verify the identity of the mobile device owner. It is transparent in that the biometrics are gathered in the background while the device is used normally, and is continuous in that verification takes place regularly. The Framework requires little effort from the device owner, goes beyond access control to provide authentication, and is acceptable and trustworthy to device owners, all while respecting the memory and processor limitations of the mobile device environment.
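Below is a minimal illustrative sketch, not taken from the thesis, of the kind of continuous, transparent verification loop the Framework describes: behavioural features gathered in the background are compared against an enrolled template. The feature values and the anomaly threshold here are purely hypothetical.

import numpy as np

class TransparentVerifier:
    def __init__(self, threshold=2.5):
        self.template_mean = None
        self.template_std = None
        self.threshold = threshold  # hypothetical anomaly threshold

    def enrol(self, samples):
        """samples: rows of behavioural features gathered in the background,
        e.g. touch pressure, swipe speed, inter-keystroke times (hypothetical)."""
        samples = np.asarray(samples, dtype=float)
        self.template_mean = samples.mean(axis=0)
        self.template_std = samples.std(axis=0) + 1e-9

    def verify(self, sample):
        """Return True if the sample is close enough to the owner's template.
        Called regularly while the device is in normal use (continuous),
        without prompting the owner (transparent)."""
        z = np.abs((np.asarray(sample, dtype=float) - self.template_mean)
                   / self.template_std)
        return z.mean() < self.threshold

# Usage: enrol on background samples, then verify each new interaction.
verifier = TransparentVerifier()
verifier.enrol([[0.42, 310, 95], [0.40, 295, 102], [0.45, 320, 90]])
print(verifier.verify([0.43, 305, 97]))   # likely True (owner-like)
print(verifier.verify([0.90, 120, 300]))  # likely False (impostor-like)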
13

Quality assessment of service providers in a conformance-centric Service Oriented Architecture

Shercliff, Gareth January 2009 (has links)
In a Service Oriented Architecture (SOA), the goal of consumers is to discover and use services which lead to them experiencing the highest quality, such that their expectations and needs are satisfied. In supporting this discovery, quality assessment tools are required to establish the degree to which these expectations will be met by specific services. Traditional approaches to quality assessment in SOA assume that providers and consumers of services will adopt a performance-centric view of quality, under which consumers are assumed to be most satisfied when they receive the highest absolute performance. However, adopting this approach does not consider the subjective nature of quality and will not necessarily lead to consumers receiving services that meet their individual needs. By using existing approaches to quality assessment that assume a consumer's primary goal is the optimisation of performance, consumers in SOA are currently unable to effectively identify and engage with providers who deliver services that will best meet their needs. Developing approaches to assessment that adopt a more conformance-centric view of quality (where it is assumed that consumers are most satisfied when a service meets, but not necessarily exceeds, their individual expectations) is a challenge that must be addressed if consumers are to effectively adopt SOA as a means of accessing services. In addressing the above challenge, this thesis develops a conformance-centric model of an SOA in which conformance is taken to be the primary goal of consumers. This model is holistic, in that it considers consumers, providers and assessment services and their relationship; and novel in that it proposes a set of rational provider behaviours that would be adopted in using a conformance-centric view of quality. Adopting such conformance-centric behaviour leads to observable and predictable patterns in the performance of the services offered by providers, due to the relationship that exists between the level of service delivered and the expectation of the consumer. In order to support consumers in the discovery of high quality services, quality assessment tools must be able to effectively assess past performance information about services, and use this as a prediction of future performance. In supporting consumers within a conformance-centric SOA, this thesis proposes and evaluates a new set of approaches to quality assessment which make use of the patterns in provider behaviour described above. The approaches developed are non-trivial, using a selection of adapted pattern classification and other statistical techniques to infer the behaviour of individual services at run-time and calculating a numerical measure of confidence for each result that consumers can use to combine assessment information with other evidence. The quality assessment approaches are evaluated within a software implementation of a conformance-centric SOA, whereby they are shown to lead to consumers experiencing higher quality than with existing performance-centric approaches. By introducing conformance-centric principles into existing real-world SOA, consumers will be able to evaluate and engage with providers that offer services that have been differentiated based on consumer expectation. The benefits of such capability over the current state-of-the-art in SOA are twofold. Firstly, individual consumers will receive higher quality services, and therefore will increase the likelihood of their needs being effectively satisfied.
Secondly, the availability of assessment tools which acknowledge the conformance-centric nature of consumers will encourage providers to offer a range of services for consumers with varying expectations, rather than simply offering a single service that aims to deliver maximum performance. This recognition will allow providers to use their resources more efficiently, leading to reduced costs and increased profitability. Such benefits can only be realised by adopting a conformance-centric view of quality across the SOA and by providing assessment services that operate effectively in such environments. This thesis proposes, develops and evaluates models and approaches that enable the achievement of this goal.
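As a rough illustration of the distinction the thesis draws, and not its actual assessment approach, the sketch below contrasts a performance-centric score with a conformance-centric one over the same hypothetical history of observed service levels.

# Performance-centric vs conformance-centric scoring of a provider, computed
# from past observations (values and expectation are hypothetical).
def performance_score(observations):
    """Mean observed performance, e.g. throughput in requests/s."""
    return sum(observations) / len(observations)

def conformance_score(observations, expectation):
    """Estimated probability that a delivery meets (but need not exceed)
    the consumer's individual expectation."""
    met = sum(1 for o in observations if o >= expectation)
    return met / len(observations)

past = [98, 102, 97, 105, 99, 65, 101]   # hypothetical past performance
print(performance_score(past))            # high average performance...
print(conformance_score(past, 95))        # ...but ~86% conformance to "at least 95"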
14

Model driven certification of Cloud service security based on continuous monitoring

Krotsiani, M. January 2016 (has links)
Cloud Computing technology offers an advanced approach for the provision of infrastructure, platform and software services without the extensive cost of owning, operating or maintaining the computational infrastructures required. However, despite being cost effective, this technology has raised concerns regarding the security, privacy and compliance of data or services offered through cloud systems. This is mainly due to the lack of transparency of services to the consumers, or due to the fact that service providers are unwilling to take full responsibility for the security of services that they offer through cloud systems, and accept liability for security breaches [18]. In such circumstances, there is a trust deficiency that needs to be addressed. The potential of certification as a means of addressing the lack of trust regarding the security of different types of services, including the cloud, has been widely recognised [149]. However, the recognition of this potential has not led to wide adoption, as was expected. The reason could be that certification has traditionally been carried out through standards and certification schemes (e.g., ISO27001 [149], ISO27002 [149] and Common Criteria [65]), which involve predominantly manual systems for security auditing, testing and inspection processes. Such processes tend to be lengthy and have a significant financial cost, which often prevents small technology vendors from adopting certification [87]. In this thesis, we present an automated approach for cloud service certification, where the evidence is gathered through continuous monitoring. This approach can be used to: (a) automatically define and execute certification models, in order to continuously acquire and analyse evidence regarding the provision of services on cloud infrastructures; (b) use this evidence to assess whether the provision is compliant with required security properties; and (c) generate and manage digital certificates to confirm the compliance of services with specific security properties.
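A highly simplified sketch, not the thesis' certification models, of the underlying idea: monitoring evidence is checked against a required security property, and the corresponding digital certificate is only considered valid while no violating evidence has been observed. The event format and the property are hypothetical.

from dataclasses import dataclass

@dataclass
class Certificate:
    service: str
    security_property: str
    valid: bool = True

def check_evidence(cert, events):
    """events: stream of monitoring records, e.g. {'type': 'access', 'authorised': bool}."""
    for e in events:
        if e["type"] == "access" and not e["authorised"]:
            cert.valid = False   # evidence contradicts the certified property
            break
    return cert

cert = Certificate("storage-service", "only authorised access")
cert = check_evidence(cert, [{"type": "access", "authorised": True},
                             {"type": "access", "authorised": False}])
print(cert.valid)  # False: the certificate should be suspended or revoked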
15

Systematic analysis and modelling of diagnostic errors in medicine

Guo, Shijing January 2016 (has links)
Diagnostic accuracy is an important index of the quality of health care service. Missed, wrong or delayed diagnosis has a direct effect on patient safety. Diagnostic errors have been discussed at length; however, the field still lacks a systemic research approach. This thesis takes the diagnostic process as a system and develops a systemic model of diagnostic errors by combining system dynamics modelling with regression analysis. It aims to propose a better way of studying diagnostic errors, as well as a deeper understanding of how factors affect the number of possible errors at each step of the diagnostic process and how factors contribute to patient outcomes in the end. The work is carried out in two parts. In the first part, a qualitative model is developed to demonstrate how errors can happen during the diagnostic process; in other words, the model illustrates the connections among key factors and dependent variables. It starts by identifying key factors of diagnostic errors, produces a hierarchical list of factors, and then illustrates interrelation loops that show how relevant factors are linked with errors. The qualitative model is based on the findings of a systematic literature review and further refined by experts' reviews. In the second part, a quantitative model is developed to provide system behaviour simulations, which demonstrates the quantitative relations among factors and errors during the diagnostic process. Regression analysis is used to estimate the quantitative relationships among multiple factors and their dependent variables during the diagnostic phases of history taking and physical examination. The regression models are then incorporated into quantitative system dynamics 'stock and flow' diagrams. The quantitative model traces error flows during the diagnostic process, and simulates how the change of one or more variables affects the diagnostic errors and patient outcomes over time. The change of the variables may reflect a change in demand from policy or a proposed external intervention. The results suggest the systemic model has the potential to help understand diagnostic errors, observe model behaviours, and provide risk-free simulation experiments for possible strategies.
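The sketch below gives a toy stock-and-flow simulation in the spirit of the quantitative model; the diagnostic stages and per-stage error rates are entirely hypothetical and merely stand in for the regression-estimated relationships.

# Patients flow through diagnostic stages; at each stage a fraction of the
# flow leaks into an "errors" stock, while the remainder continues.
def simulate(patients_per_step=100, steps=52,
             error_rates=(0.05, 0.03, 0.02)):  # e.g. history taking, examination, tests
    correct, errors = 0.0, 0.0
    for _ in range(steps):
        flow = float(patients_per_step)
        for rate in error_rates:
            errors += flow * rate      # flow diverted into the error stock
            flow *= (1.0 - rate)       # remaining flow continues to the next stage
        correct += flow
    return correct, errors

correct, errors = simulate()
print(round(correct), round(errors))   # accumulated outcomes over the simulated period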
16

Enhancing recommendations in specialist search through semantic-based techniques and multiple resources

Almuhaimeed, Abdullah January 2016 (has links)
Information resources abound on the Internet, but mining these resources is a non-trivial task. Such abundance has raised the need to enhance services provided to users, such as recommendations. The purpose of this work is to explore how better recommendations can be provided to specialists in specific domains such as bioinformatics, by introducing semantic techniques that reason over different resources and by using specialist search techniques. Such techniques exploit semantic relations and hidden associations that arise from the overlap of information among various concepts in multiple bioinformatics resources such as ontologies, websites and corpora. Thus, this work introduces a new method that reasons over different bioinformatics resources and then discovers and exploits relations and information that may not exist in the original resources. Such relations, for example sibling and semantic similarity relations, may be discovered as a consequence of this overlap and used to enhance the accuracy of the recommendations provided on bioinformatics content (e.g. articles). In addition, this research introduces a set of semantic rules that can extract different semantic information and relations inferred among various bioinformatics resources. This project introduces these semantic-based methods as part of a recommendation service within a content-based system. Moreover, it uses specialists' interests to enhance the provided recommendations by employing a method that collects user data implicitly. The data are then represented as an adaptive ontological user profile for each user, based on his/her preferences, which contributes to more accurate recommendations for each specialist in the field of bioinformatics.
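As a loose illustration, not the thesis method, the sketch below scores an article against an ontological user profile, giving partial credit for concepts that are only semantically related (e.g. siblings in an ontology) rather than exact matches; all concept names and weights are hypothetical.

profile = {"sequence alignment": 1.0, "BLAST": 0.8}              # adaptive user profile
related = {"BLAST": {"FASTA"}, "sequence alignment": {"multiple alignment"}}

def score(article_concepts, exact_w=1.0, related_w=0.5):
    s = 0.0
    for c, w in profile.items():
        if c in article_concepts:
            s += exact_w * w                       # exact concept match
        elif related.get(c, set()) & article_concepts:
            s += related_w * w                     # credit inferred from semantic relations
    return s

print(score({"BLAST", "genome assembly"}))         # exact match only
print(score({"FASTA", "multiple alignment"}))      # only semantically related concepts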
17

Towards lightweight, low-latency network function virtualisation at the network edge

Cziva, Richard January 2018 (has links)
Communication networks are witnessing a dramatic growth in the number of connected mobile devices, sensors and the Internet of Everything (IoE) equipment, which have been estimated to exceed 50 billion by 2020, generating zettabytes of traffic each year. In addition, networks are stressed to serve the increased capabilities of the mobile devices (e.g., HD cameras) and to fulfil the users' desire for always-on, multimedia-oriented, and low-latency connectivity. To cope with these challenges, service providers are exploiting softwarised, cost-effective, and flexible service provisioning, known as Network Function Virtualisation (NFV). At the same time, future networks are aiming to push services to the edge of the network, into close physical proximity to users, which has the potential to reduce end-to-end latency, while increasing the flexibility and agility of allocating resources. However, the heavy footprint of today's NFV platforms and their lack of dynamic, latency-optimal orchestration prevent them from being used at the edge of the network. In this thesis, the opportunities of bringing NFV to the network edge are identified. As a concrete solution, the thesis presents Glasgow Network Functions (GNF), a container-based NFV framework that allocates and dynamically orchestrates lightweight virtual network functions (vNFs) at the edge of the network, providing low-latency network services (e.g., security functions or content caches) to users. The thesis presents a powerful formalisation for the latency-optimal placement of edge vNFs and provides an exact solution using Integer Linear Programming, along with a placement scheduler that relies on Optimal Stopping Theory to efficiently re-calculate the placement following roaming users and temporal changes in latency characteristics. The results of this work demonstrate that GNF's real-world vNF examples can be created and hosted on a variety of hosting devices, including VMs from public clouds and low-cost edge devices typically found at the customer's premises. The results also show that GNF can carefully manage the placement of vNFs to provide low-latency guarantees, while minimising the number of vNF migrations required by the operators to keep the placement latency-optimal.
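A minimal sketch of latency-optimal placement formulated as an Integer Linear Program, in the spirit of (but far simpler than) GNF's formalisation; the users, edge hosts, latencies and capacities are hypothetical, and the PuLP library is used purely for illustration.

import pulp

users = ["u1", "u2", "u3"]
hosts = ["edge1", "edge2"]
latency = {("u1", "edge1"): 2, ("u1", "edge2"): 9,
           ("u2", "edge1"): 8, ("u2", "edge2"): 3,
           ("u3", "edge1"): 4, ("u3", "edge2"): 5}   # hypothetical user-to-host latencies
capacity = {"edge1": 2, "edge2": 2}                  # max vNFs per host

prob = pulp.LpProblem("vnf_placement", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", list(latency.keys()), cat="Binary")  # x[u,h]=1 if u's vNF runs on h

prob += pulp.lpSum(latency[k] * x[k] for k in latency)        # minimise total user-to-vNF latency
for u in users:                                               # each user's vNF placed exactly once
    prob += pulp.lpSum(x[(u, h)] for h in hosts) == 1
for h in hosts:                                               # respect host capacity
    prob += pulp.lpSum(x[(u, h)] for u in users) <= capacity[h]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
placement = {u: h for (u, h) in latency if x[(u, h)].value() == 1}
print(placement)   # e.g. {'u1': 'edge1', 'u2': 'edge2', 'u3': 'edge1'}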
18

Scene understanding by robotic interactive perception

Khan, Aamir January 2018 (has links)
This thesis presents a novel and generic visual architecture for scene understanding by robotic interactive perception. This proposed visual architecture is fully integrated into autonomous systems performing object perception and manipulation tasks. The proposed visual architecture uses interaction with the scene in order to improve scene understanding substantially over non-interactive models. Specifically, this thesis presents two experimental validations of an autonomous system interacting with the scene: Firstly, an autonomous gaze control model is investigated, where the vision sensor directs its gaze to satisfy a scene exploration task. Secondly, autonomous interactive perception is investigated, where objects in the scene are repositioned by robotic manipulation. The proposed visual architecture for scene understanding involving perception and manipulation tasks has four components: 1) A reliable vision system, 2) Camera-hand eye calibration to integrate the vision system into an autonomous robot's kinematic frame chain, 3) A visual model performing perception tasks and providing the knowledge required for interaction with the scene, and finally, 4) A manipulation model which, using knowledge received from the perception model, chooses an appropriate action (from a set of simple actions) to satisfy a manipulation task. This thesis presents contributions for each of the aforementioned components. Firstly, a portable active binocular robot vision architecture that integrates a number of visual behaviours is presented. This active vision architecture has the ability to verge, localise, recognise and simultaneously identify multiple target object instances. The portability and functional accuracy of the proposed vision architecture are demonstrated by carrying out both qualitative and comparative analyses using different robot hardware configurations, feature extraction techniques and scene perspectives. Secondly, a camera and hand-eye calibration methodology for integrating an active binocular robot head within a dual-arm robot is described. For this purpose, the forward kinematic model of the active robot head is derived and the methodology for calibrating and integrating the robot head is described in detail. A rigid calibration methodology has been implemented to provide a closed-form hand-to-eye calibration chain and this has been extended with a mechanism to allow the camera external parameters to be updated dynamically for optimal 3D reconstruction to meet the requirements for robotic tasks such as grasping and manipulating rigid and deformable objects. It is shown from experimental results that the robot head achieves an overall accuracy of less than 0.3 millimetres while recovering the 3D structure of a scene. In addition, a comparative study between current RGB-D cameras and our active stereo head within two dual-arm robotic test-beds is reported that demonstrates the accuracy and portability of our proposed methodology. Thirdly, this thesis proposes a visual perception model for the task of category-wise object sorting, based on Gaussian Process (GP) classification, that is capable of recognising object categories from point cloud data. In this approach, Fast Point Feature Histogram (FPFH) features are extracted from point clouds to describe the local 3D shape of objects and a Bag-of-Words coding method is used to obtain an object-level vocabulary representation.
Multi-class Gaussian Process classification is employed to provide a probability estimate of the identity of the object and serves the key role of modelling perception confidence in the interactive perception cycle. The interaction stage is responsible for invoking the appropriate action skills as required to confirm the identity of an observed object with high confidence as a result of executing multiple perception-action cycles. The recognition accuracy of the proposed perception model has been validated based on simulation input data using both Support Vector Machine (SVM) and GP based multi-class classifiers. Results obtained during this investigation demonstrate that by using a GP-based classifier, it is possible to obtain true positive classification rates of up to 80%. Experimental validation of the above semi-autonomous object sorting system shows that the proposed GP based interactive sorting approach outperforms random sorting by up to 30% when applied to scenes comprising configurations of household objects. Finally, a fully autonomous visual architecture is presented that has been developed to accommodate manipulation skills for an autonomous system to interact with the scene by object manipulation. This proposed visual architecture consists mainly of two stages: 1) a perception stage, which is a modified version of the aforementioned visual interaction model, and 2) an interaction stage, which performs a set of ad-hoc actions relying on the information received from the perception stage. More specifically, the interaction stage simply reasons over the information (class label and associated probabilistic confidence score) received from the perception stage to choose one of the following two actions: 1) If an object class has been identified with high confidence, the object is removed from the scene and placed in the designated basket/bin for that particular class. 2) If an object class has been identified with lower probabilistic confidence, then, inspired by the human behaviour of inspecting doubtful objects, an action is chosen to investigate that object further, confirming its identity by capturing more images from different views in isolation. The perception stage then processes these views, and hence multiple perception-action/interaction cycles take place. From an application perspective, the task of autonomous category-based object sorting is performed and the experimental design for the task is described in detail.
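The sketch below, using synthetic data rather than real FPFH features, shows the general shape of such a perception stage: a multi-class Gaussian Process classifier over bag-of-words histograms whose class probability acts as the confidence signal deciding whether another perception-action cycle is needed. The 0.7 threshold and the data generator are hypothetical.

import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
n_words = 20                                     # size of the BoW vocabulary
def fake_histograms(centre, n):                  # stand-in for FPFH + BoW coding
    return np.clip(centre + 0.05 * rng.standard_normal((n, n_words)), 0, None)

centres = rng.random((3, n_words))               # three object categories
X = np.vstack([fake_histograms(c, 30) for c in centres])
y = np.repeat([0, 1, 2], 30)

clf = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0)).fit(X, y)
probs = clf.predict_proba(fake_histograms(centres[1], 1))[0]
label, confidence = int(np.argmax(probs)), float(np.max(probs))
if confidence < 0.7:                             # hypothetical confidence threshold
    print("low confidence: trigger another perception-action cycle")
else:
    print("sort object as class", label, "with confidence", round(confidence, 2))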
19

Quantifying textual similarities across scientific research communities

Duffee, Boyd January 2018 (has links)
There are well-established approaches for text mining collections of documents and for understanding the network of citations between academic papers. Few studies have examined the textual content of the papers that constitute a citation network. A document corpus was obtained from the arXiv repository, selected from papers relating to the subject of Dark Matter, and a citation network was created from the data held by NASA's Astrophysics Data System on those papers, their citations and references. I use the Louvain community-finding algorithm on the Dark Matter network to identify groups of papers with a higher density of citations, and compare the textual similarity between papers in the Dark Matter corpus using the Vector Space Model of document representation and the cosine similarity function. It was found that pairs of papers within a citation community have a higher similarity than they do with papers in other citation communities. This implies that content is associated with structure in scientific citation networks, which opens avenues for research on network communities for finding ground-truth using advanced Text Mining techniques, such as Topic Modelling. It was found that using the titles of papers in a citation network community was a good method for identifying the community. The power law exponent of the degree distribution was found to be 2.3, lower than results reported for other citation networks. The selection of papers based on a single subject, rather than based on a journal or category, is suggested as the reason for this lower value. It was also found that the degree pair correlation of the citation network classifies it as a disassortative network with a cut-off value at degree kc = 30. The textual similarity of documents decreases linearly with age over a 15 year timespan.
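A toy sketch of the core comparison, on made-up titles rather than the arXiv Dark Matter corpus: TF-IDF vectors and cosine similarity, averaged separately over paper pairs inside the same citation community and over pairs that cross communities. The community labels here simply stand in for Louvain output on the citation graph.

from itertools import combinations
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

papers = {"p1": "dark matter halo rotation curves",
          "p2": "halo density profiles and rotation curves",
          "p3": "direct detection of weakly interacting massive particles",
          "p4": "wimp direct detection cross sections"}
community = {"p1": 0, "p2": 0, "p3": 1, "p4": 1}   # e.g. from Louvain on the citation network

ids = list(papers)
sim = cosine_similarity(TfidfVectorizer().fit_transform([papers[i] for i in ids]))

within, across = [], []
for (a, i), (b, j) in combinations(enumerate(ids), 2):
    (within if community[i] == community[j] else across).append(sim[a, b])

print(sum(within) / len(within), sum(across) / len(across))  # within-community similarity is higher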
20

Model reduction techniques for probabilistic verification of Markov chains

Kamaleson, Nishanthan January 2018 (has links)
Probabilistic model checking is a quantitative verification technique that aims to verify the correctness of probabilistic systems. Nevertheless, it suffers from the so-called state space explosion problem. In this thesis, we propose two new model reduction techniques to improve the efficiency and scalability of verifying probabilistic systems, focusing on discrete-time Markov chains (DTMCs). In particular, our emphasis is on verifying quantitative properties that bound the time or cost of an execution. We also focus on methods that avoid the explicit construction of the full state space. We first present a finite-horizon variant of probabilistic bisimulation for DTMCs, which preserves a bounded fragment of PCTL. We also propose another model reduction technique that reduces what we call linear inductive DTMCs, a class of models whose state space grows linearly with respect to a parameter. All the techniques presented in this thesis were implemented in the PRISM model checker. We demonstrate the effectiveness of our work by applying it to a selection of existing benchmark probabilistic models, showing that both of our new approaches can provide significant reductions in model size and in some cases outperform the existing implementations of probabilistic verification in PRISM.
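For illustration only, the sketch below computes the quotient of a toy DTMC under standard probabilistic bisimulation by signature refinement; it is not the finite-horizon variant or the PRISM implementation developed in the thesis.

# States are grouped by label and repeatedly split until all states in the
# same block agree on the probability of moving into every block.
def bisim_quotient(P, labels):
    """P[s][t] = transition probability s -> t; labels[s] = atomic proposition of s."""
    states = list(P)
    blocks = {}                                   # initial partition by label
    for s in states:
        blocks.setdefault(labels[s], set()).add(s)
    partition = list(blocks.values())
    while True:
        def signature(s):
            return tuple(round(sum(P[s].get(t, 0.0) for t in B), 10) for B in partition)
        refined = {}
        for B in partition:
            for s in B:
                refined.setdefault((labels[s], signature(s)), set()).add(s)
        new_partition = list(refined.values())
        if len(new_partition) == len(partition):  # no block was split: partition is stable
            return new_partition
        partition = new_partition

P = {"s0": {"s1": 0.5, "s2": 0.5}, "s1": {"s3": 1.0}, "s2": {"s3": 1.0}, "s3": {"s3": 1.0}}
labels = {"s0": "init", "s1": "mid", "s2": "mid", "s3": "done"}
print(bisim_quotient(P, labels))   # s1 and s2 collapse into a single block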
