441

Evaluation of trust in the Internet of Things : models, mechanisms and applications

Truong, N. B. January 2018
In the blooming era of the Internet of Things (IoT), trust has become a vital factor for provisioning reliable smart services without human intervention, as it reduces the risk in autonomous decision making. However, the merging of physical objects, cyber components and humans in the IoT infrastructure has introduced new concerns for the evaluation of trust. Consequently, a large number of trust-related challenges remain unsolved due to the ambiguity of the concept of trust and the variety of divergent trust models and management mechanisms across IoT scenarios. The ultimate goal of this PhD thesis is to propose an efficient and practical trust evaluation mechanism for any two entities in the IoT. To achieve this goal, the first objective is to augment the generic trust concept and provide a conceptual model of trust, yielding a comprehensive understanding of trust, its influencing factors and the possible Trust Indicators (TIs) in the context of the IoT. Building on this, the second objective is a trust model called REK, comprising the triad of Reputation, Experience and Knowledge TIs, which covers multi-dimensional aspects of trust by incorporating heterogeneous information ranging from direct observation and personal experience to global opinion. Mathematical models and evaluation mechanisms for the three TIs in the REK trust model are proposed. The Knowledge TI acts as “direct trust”, rendering a trustor’s understanding of a trustee in a given scenario; it can be obtained from the limited available information about the characteristics of the trustee, the environment and the trustor’s perspective, using a variety of techniques. The Experience and Reputation TIs originate from social features and are extracted from previous interactions among entities in the IoT; their mathematical models and calculation mechanisms are also proposed, leveraging the sociological behaviour of humans in the real world and drawing inspiration from Google PageRank in the web-ranking area, respectively. The REK trust model is applied in a variety of IoT scenarios, such as Mobile Crowd-Sensing (MCS), a car-sharing service, data sharing and exchange platforms in smart cities and vehicular networks, and the empowerment of blockchain-based systems. The feasibility and effectiveness of the REK model and its associated evaluation mechanisms are demonstrated not only by theoretical analysis but also by real-world applications deployed in our ongoing TII and Wise-IoT projects.
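
The abstract names the ingredients of REK (three TIs, with Reputation inspired by Google PageRank) but not the formulas, so the following is a minimal, hypothetical sketch only: a PageRank-style reputation score over an interaction graph and a simple weighted aggregation of the three TIs. The function names, weights and aggregation form are assumptions, not the thesis's actual model.

```python
import numpy as np

def reputation_pagerank(adj, damping=0.85, iters=50):
    """PageRank-style reputation over a directed interaction graph.

    adj[i, j] = 1 if entity i has interacted positively with entity j.
    Returns one reputation score per entity, scaled so the maximum is 1.
    """
    n = adj.shape[0]
    out_deg = adj.sum(axis=1, keepdims=True)
    # Rows with no outgoing edges distribute their rank uniformly.
    transition = np.where(out_deg > 0, adj / np.maximum(out_deg, 1), 1.0 / n)
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):
        rank = (1 - damping) / n + damping * (transition.T @ rank)
    return rank / rank.max()

def rek_trust(knowledge, experience, reputation, weights=(0.5, 0.3, 0.2)):
    """Hypothetical weighted aggregation of the three REK Trust Indicators,
    each assumed to be pre-normalised to [0, 1]."""
    w_k, w_e, w_r = weights
    return w_k * knowledge + w_e * experience + w_r * reputation

# Toy example: four entities; trustor 0 evaluates trustee 3.
adj = np.array([[0, 1, 1, 1],
                [0, 0, 1, 1],
                [1, 0, 0, 1],
                [0, 1, 0, 0]], dtype=float)
rep = reputation_pagerank(adj)
print(rek_trust(knowledge=0.7, experience=0.6, reputation=rep[3]))
```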
442

Trust evaluation in the IoT environment

Jayasinghe, U. U. K. January 2018
Along with the many benefits of the IoT, its heterogeneity poses a new challenge in establishing a trustworthy environment among objects, owing to the absence of proper enforcement mechanisms. Further, these concerns are often addressed only with regard to the security and privacy matters involved. However, such common network security measures are not adequate to preserve the integrity of the information and services exchanged over the internet; they remain vulnerable to threats ranging from the risks of data management at the cyber-physical layers to potential discrimination at the social layer. Trust can therefore be considered a key property to enforce among objects in order to guarantee trustworthy services. Typically, trust revolves around the assurance and confidence that people, data, entities, information or processes will function or behave in expected ways. However, enforcing trust in an artificial society like the IoT is far more difficult, as things do not have the inherent judgmental ability to assess risks and other influencing factors and so evaluate trust as humans do. Hence, it is important to quantify the perception of trust in a form that artificial agents can understand. In computer science, trust is considered a computational value depicted by a relationship between trustor and trustee, described in a specific context, measured by trust metrics and evaluated by a mechanism. Several trust evaluation mechanisms can be found in the literature; most of this work, however, has gravitated towards security and privacy issues instead of considering the universal meaning of trust and its dynamic nature, and it lacks a proper trust evaluation model and management platform addressing all aspects of trust establishment. It is therefore almost impossible to bring all these solutions together and develop a common platform that resolves end-to-end trust issues in a digital environment. This thesis attempts to fill these gaps through the following research work. First, it proposes concrete definitions that formally identify trust as a computational concept, together with its characteristics. Next, a well-defined trust evaluation model is proposed to identify, evaluate and create trust relationships among objects for calculating trust. A trust management platform is then presented, identifying the major tasks of the trust enforcement process: trust data collection, trust data management, trust information analysis, dissemination of trust information and trust information lifecycle management. Next, the thesis proposes several approaches to assess trust attributes, and thereby the trust metrics of the above model, for trust evaluation. Further, to minimize dependence on human interaction in evaluating trust, an adaptive trust evaluation model based on machine learning techniques is presented. From a standardization point of view, the scope of the current standards on network security and cybersecurity needs to be expanded to take trust issues into consideration. Hence, this thesis provides several inputs towards standardization on trust, including a computational definition of trust, a trust evaluation model targeting both object and data trust, and a platform to manage the trust evaluation process.
443

Machine learning approaches and web-based system to the application of disease modifying therapy for sickle cell

Khalaf, M. I. January 2018
Sickle cell disease (SCD) is a common and serious genetic disease with a severe impact, caused by red blood cell (RBC) abnormalities. According to the World Health Organisation, 7 million newborn babies each year suffer either from a congenital anomaly or from an inherited disease, primarily thalassemia and sickle cell disease. In the case of SCD, recent research has shown the beneficial effects of a drug called hydroxyurea/hydroxycarbamide in modifying the disease phenotype. The clinical management of this disease-modifying therapy is difficult and time-consuming for clinical staff; the challenge includes finding an optimal classifier that can cope with missing values, multi-class datasets and feature selection. For the classification and discriminant analysis of the SCD datasets, seven machine learning classifiers are selected, representing both linear and non-linear methods. Run individually, these classifiers provided effective outcomes in terms of the classification performance evaluation metrics. To produce a more optimal outcome, this research proposed and designed combined (ensemble) classifiers built from neural network models, the random forest classifier and the K-nearest-neighbour classifier. Of these, the combination of the Levenberg-Marquardt algorithm, the voted perceptron classifier, the radial basis neural classifier and the random forest classifier obtained the highest performance and accuracy, achieving better results on both the training and testing sets. Recent technological advances based on smart devices have improved medical facilities and become increasingly popular for real-time health monitoring and remote/personal healthcare. A web-based system was developed under the supervision of the haematology specialist at Alder Hey Children’s Hospital in order to produce an effective and useful system for both patients and clinicians. To sum up, the simulation experiments conclude that machine learning combined with the web-based system platform represents an alternative procedure that could assist healthcare professionals, particularly specialist nurses and junior doctors, in improving the quality of care for patients with sickle cell disorder.
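
As an illustrative sketch of the ensemble idea only (not the thesis's exact configuration: scikit-learn provides no Levenberg-Marquardt trainer or voted perceptron, and the real SCD data is not public), a soft-voting combination of a neural network, a random forest and a K-nearest-neighbour classifier on synthetic data might look like this:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the SCD dataset: three severity classes.
X, y = make_classification(n_samples=600, n_features=20, n_informative=8,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Soft-voting ensemble over neural, tree-based and instance-based learners.
ensemble = VotingClassifier(
    estimators=[
        ("mlp", MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                              random_state=0)),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
    ],
    voting="soft",  # average the predicted class probabilities
)
ensemble.fit(X_tr, y_tr)
print(f"test accuracy: {ensemble.score(X_te, y_te):.3f}")
```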
444

Curiosity driven search experiences

Millan Cifuentes, Juan D. January 2017
Casual-Leisure Search describes any behaviour that allows people to express and satisfy hedonistic needs, rather than information needs, as part of the information-seeking process. For example, individuals who search their social media universe for hours after a long day at work may do so out of curiosity, to relax or for fun (e.g. exploring for the experience). Studies have shown that classical information seeking (IS) and interactive information retrieval (IIR) models fail to represent such behaviours, because they were created by observing people in work-related scenarios and assume that search is always a rational decision-making process with an extrinsic utilitarian value. The research described in this PhD work investigates IIR from the perspective of psychological curiosity and leisure information-seeking behaviour. Traditional search engines focus the user experience on satisfying users with topically relevant information (i.e. quick lookup search and then moving on), but they offer limited support for the discovery of unknown information, because they fail to entice and engage users in exploration as a proxy for seeking enjoyment in both leisure and work scenarios. The research described here deepens understanding of the role that curiosity plays in IIR and investigates the merits of incorporating the characteristics and function of human curiosity into the design of IIR systems. The research is grounded in a theoretical understanding of how human curiosity works: a review of the relevant psychological curiosity literature offers a means to critique existing IIR tools and a basis from which to start designing novel curiosity-driven search tools. In the first experiment, this research compared IIR behaviour between a standard query-response paradigm and a curiosity-driven search-map prototype using social media content, and attempts to learn lessons from the behaviour that people show in everyday casual-leisure search scenarios. In the second experiment, this research contrasts IIR behaviour between the standard query-response paradigm and a curiosity-driven adaptation of it that uses search notifications and recommendations for news reading in a social media leisure search scenario. The tools are evaluated to determine the usefulness of incorporating curiosity into the design of IIR systems, to learn about its effect on user engagement and on how user exploration increases when motivated by a hedonistic need, and to elaborate a set of design recommendations for enhancing the search experience in leisure scenarios.
445

Robust hand pose recognition from stereoscopic capture

Basaru, R. R. January 2018
Hand pose is emerging as an important interface for human-computer interaction. The problem of hand pose estimation from passive stereo inputs has received less attention in the literature compared to active depth sensors. This thesis seeks to address this gap by presenting a data-driven method to estimate hand pose from a stereoscopic camera input, with experimental results comparable to more expensive active depth sensors. The frameworks presented in this thesis are based on capture from a two-camera stereo rig, as it yields a simpler and cheaper set-up and calibration. Three frameworks are presented, describing the sequential steps taken to solve the problem of depth and pose estimation of hands. The first is a data-driven method to estimate a high-quality depth map of a hand from a stereoscopic camera input by introducing a novel regression framework. The method first computes disparity using a robust stereo matching technique; then it applies a machine learning technique based on Random Forests to learn the mapping between the estimated disparity and depth, given ground-truth data. We introduce Eigen Leaf Node Features (ELNFs), which perform feature selection at the leaf nodes of each tree to identify the features that are most discriminative for depth regression. The system provides a robust method for generating a depth image with an inexpensive stereo camera. The second framework improves on the task of hand depth estimation from stereo capture by introducing a novel superpixel-based regression framework that takes advantage of the smoothness of the depth surface of the hand. To this end, it introduces the Conditional Regressive Random Forest (CRRF), a method that combines a Conditional Random Field (CRF) and a Regressive Random Forest (RRF) to model the mapping from a stereo RGB image pair to a depth image. The RRF provides a unary term that adaptively selects different stereo-matching measures as it implicitly determines matching pixels in a coarse-to-fine manner. While the RRF makes a depth prediction for each superpixel independently, the CRF unifies the predictions by modeling pairwise interactions between adjacent superpixels. The final framework introduces a stochastic approach that proposes potential depth solutions for the observed stereo capture and evaluates these proposals using two convolutional neural networks (CNNs). The first CNN, configured in a Siamese network architecture, evaluates how consistent the proposed depth solution is with the observed stereo capture; the second CNN estimates a hand pose given the proposed depth. Unlike sequential approaches that reconstruct pose from a known depth, this method jointly optimizes the hand pose and depth estimation through Markov chain Monte Carlo (MCMC) sampling. In this way, pose estimation can correct for errors in depth estimation, and vice versa. Experimental results using an inexpensive stereo camera show that the proposed system measures pose more accurately than competing methods. More importantly, it demonstrates the possibility of pose recovery from stereo capture that is on par with depth-based pose recovery.
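
To make the first framework's core step concrete, the sketch below fits a random forest that maps disparity-derived features to depth on synthetic data. The camera intrinsics, features and data are hypothetical, and the thesis's ELNF feature selection and robust stereo matching are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic training data: for each pixel, a feature vector built from a
# noisy disparity estimate plus stand-in local context features, with
# ground-truth depth from the ideal stereo relation z = f * B / d.
focal, baseline = 525.0, 0.06          # hypothetical intrinsics (px, metres)
disparity = rng.uniform(5.0, 60.0, size=5000)
context = rng.normal(0.0, 1.0, size=(5000, 4))   # stand-in patch features
X = np.column_stack([disparity + rng.normal(0, 0.5, 5000), context])
y = focal * baseline / disparity                  # ground-truth depth (m)

forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Predict depth for a new pixel whose matched disparity is 20 px.
x_new = np.array([[20.0, 0.0, 0.0, 0.0, 0.0]])
print(f"predicted depth: {forest.predict(x_new)[0]:.2f} m")
```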
446

Exploring the nature of cognitive resilience strategies

Day, J. D. January 2018
When seeking to improve the safety or performance of a system, there is a tendency to focus on the negative aspects of human performance or interaction: errors, threats, past incidents or identified issues and flaws. This does not, however, tell the whole story. Users frequently deploy a variety of resilient interventions, devising and implementing strategies to improve performance and mitigate threats such as error, particularly during complex or challenging circumstances. In so doing, users can and do make an active, positive contribution to the wider resilience of a system. To date, the question of how individual actors within a system leverage such resilience strategies to improve its functioning has received only limited direct investigation. An initial study was undertaken as a probing investigation to test the notion of user-configured cues as a means to facilitate individual resilience. The insights from this study challenged an existing foundational categorisation scheme, which we then sought to expand and refine in collaboration with its original authors, to better represent and articulate ten different types of resilience strategy. To broaden our real-world pool of strategy accounts, a diary study was then conducted, the resulting data being used both to inform and to validate a new iteration of the scheme. Prompted by challenges to the applicability of the scheme to complex resilience cases, we introduced the notion of a new type of compound strategy and developed a framework to support the analysis of such strategies by deconstructing them into their motivational and functional components. A final controlled laboratory study was undertaken to apply our insights. The resulting refined categorisation scheme and conceptual framework enrich our understanding of the phenomenon of user, or individual, resilience and could be leveraged to inform and support the design of future technical and sociotechnical systems.
447

Augmenting communication technologies with non-primary sensory modalities

Tewell, J. R. January 2018
Humans combine their senses to enrich their experience of the world around them. While computers have evolved to reflect these sensory demands, only the primary senses of vision and audition (and, to an extent, touch) are used in modern communication. This thesis investigated how additional information, such as emotion and navigational assistance, might be communicated using technology-based implementations of sensory displays that output the non-primary modalities of smell, vibrotactile touch and thermo-touch. It explored using a portable atomiser sprayer to deliver emotional information via smell to mobile phone users, a ring-shaped device worn on the finger to display emotional information using vibration and colours, and an array of thermoelectric coolers worn on the arm to create temperature sensations. Additionally, it explored two methods of signalling temperature using the thermal implementation and, finally, used that implementation in a controlled study to augment the perceived emotion of text messages with temperature. There were challenges in using some of these implementations to display information. Smells produced with the scent technology were ambiguous and highly cognitive, and poor delivery to the user produced undesirable cross-adaptation effects when smells lingered and mixed in the environment. The device used to communicate vibrotactile and colour-lighting cues neutralized the emotions in text messages. Furthermore, temperature-pattern discrimination using the thermal implementation was difficult, owing to non-linear interaction effects on the skin’s surface as well as latency arising from the thermal neurological pathway and from the technology used to heat and cool the skin. However, the thermal implementation enabled more accurate user discrimination between thermal signals than a single-stimulator design provided. Furthermore, the utility of continuous thermal feedback was demonstrated in the context of spatial navigation, where it improved user performance compared with receiving no thermal information. Finally, temperature was shown to elicit arousal reactions across subjects and could augment the arousal of text messages, especially when the content of the message was strongly neutral. However, no similar statistical significance was observed for valence, demonstrating the complex implications of using thermal cues to convey emotional information.
448

A framework for hierarchical time-oriented data visualisation

Henkin, Rafael January 2018
The paradigm of exploratory data analysis advocates the use of multiple perspectives to formulate hypotheses about the data. This thesis presents a framework to support it through interactive hierarchical visualisations for the exploration of temporal data. The research leading to the framework involves investigating what the conventional interactive techniques for temporal data are, how they can be combined with hierarchical methods, and which conceptual transformations enable navigation between multiple perspectives. The aim of the research is to facilitate the design of interactive visualisations based on the use of granularities, or units of time, which hide or reveal processes at various scales, a key aspect of temporal data. The characteristics of granularities are well suited to hierarchical visualisations, as evidenced in the literature; however, current conceptual models and frameworks lack the means to incorporate these characteristics as an integral part of visualisation design. The research addresses this by combining features of hierarchical and time-oriented visualisations and enabling the systematic re-configuration of visualisations. Current techniques for visualising temporal data are analysed and specified at previously unsupported levels by breaking visual encodings down into decomposed layers, which can be arranged and recombined through hierarchical composition methods. The transformations of the properties of temporal data are then defined by drawing on the interactions found in the literature and formalising them as a set of conceptual operators. The complete framework is introduced by combining the components that enable the specification of visual encodings, hierarchical compositions and temporal transformations. A case study then demonstrates how the framework can be used, and its benefits for evaluating analysis strategies in visual exploration.
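
The framework itself is conceptual rather than code-level, but the central notion of granularities can be illustrated with a small sketch, assuming a hypothetical hourly event series and using pandas; the particular granularities chosen here (day, week, hour-of-day) are assumptions for illustration, not the thesis's own.

```python
import numpy as np
import pandas as pd

# Hypothetical event series: hourly counts over two months.
idx = pd.date_range("2018-01-01", "2018-02-28 23:00", freq="h")
counts = pd.Series(np.random.default_rng(0).poisson(5, len(idx)), index=idx)

# The same data viewed at three granularities; switching between such
# views is one of the transformations a hierarchical visualisation needs.
daily = counts.resample("D").sum()       # day-level granularity
weekly = counts.resample("W").sum()      # week-level granularity
by_hour_of_day = counts.groupby(counts.index.hour).mean()  # cyclic view

print(daily.head(), weekly.head(), by_hour_of_day.head(), sep="\n\n")
```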
449

Ontology driven clinical decision support for early diagnostic recommendations

Mannamparambil Chandrasekharan, Gopikrishnan January 2018
Diagnostic error is a significant problem in medicine, a major cause of concern for patients and clinicians, and is associated with moderate to severe harm to patients. Diagnostic errors are a primary cause of clinical negligence and can result in malpractice claims. Cognitive errors caused by biases such as premature closure and confirmation bias have been identified as a major cause of diagnostic error. Researchers have identified several strategies to reduce diagnostic error arising from cognitive factors, including considering alternatives, reducing reliance on memory and providing access to clear, well-organized information. Clinical Decision Support Systems (CDSSs) have been shown to reduce diagnostic errors. Clinical guidelines improve the consistency of care and can potentially improve healthcare efficiency; they can alert clinicians to the diagnostic tests and procedures that have the greatest evidence and provide the greatest benefit. Clinical guidelines can be used to streamline clinical decision making and provide the knowledge base for guideline-based CDSSs and clinical alert systems, and they can potentially improve diagnostic decision making by improving information gathering. Argumentation is an emerging approach for dealing with unstructured evidence in domains, such as healthcare, that are characterized by uncertainty: the knowledge needed to support decision making is expressed in the form of arguments. Argumentation has certain advantages over other decision-support reasoning methods, including the ability to function with incomplete information, the ability to capture domain knowledge in an easy manner, the use of non-monotonic logic to support defeasible reasoning, and the provision of recommendations in a manner that can easily be explained to clinicians. Argumentation is therefore a suitable method for generating early diagnostic recommendations, and argumentation-based CDSSs have been developed in a wide variety of clinical domains. However, the impact of an argumentation-based diagnostic CDSS has not yet been evaluated. The first part of this thesis evaluates the impact of guideline recommendations and of an argumentation-based diagnostic CDSS on clinicians’ information gathering and diagnostic decision making; in addition, the impact of guideline recommendations on management decision making is evaluated. The study found that argumentation is a viable method for generating diagnostic recommendations that can potentially help reduce diagnostic error. It showed that guideline recommendations have a positive impact on the information gathering of optometrists and can potentially help them ask the right questions and perform tests according to current standards of care; guideline recommendations were also found to have a positive impact on management decision making. A CDSS is, however, dependent on the quality of the data entered into it: faulty interpretation can lead the clinician to enter wrong data and cause the CDSS to provide wrong recommendations. Moreover, current-generation argumentation-based CDSSs and other diagnostic decision support systems have problems with semantic interoperability that prevent them from using data from the web. The clinician and the CDSS are limited to the information collected during a clinical encounter and cannot access information on the web that could be relevant to a patient, owing to the distributed nature of medical information and the lack of semantic interoperability between healthcare systems.
Current argumentation-based decision support applications require specialized tools for modelling and execution, and this hinders their widespread use and adoption, especially when the tools require additional training and licensing arrangements. Semantic web and linked data technologies have been developed to overcome problems with semantic interoperability on the web, and ontology-based diagnostic CDSS applications have been built with semantic web technology to overcome the semantic interoperability problems of healthcare data in decision support applications. However, these models have problems with expressiveness, requiring specialized software and algorithms to generate diagnostic recommendations. The second part of this thesis describes the development of an argumentation-based, ontology-driven diagnostic model, and of a CDSS that can execute this model to generate ranked diagnostic recommendations. This novel model, called the Disease-Symptom Model, combines the strengths of argumentation with those of semantic web technology. It allows the domain expert to model arguments favouring and negating a diagnosis in the OWL/RDF language, uses a simple weighting scheme representing the degree of support of each argument within the model, and uses SPARQL to sum the weights and produce a ranked diagnostic recommendation. The model can provide justifications for each recommendation in a manner that clinicians can easily understand. CDSS prototypes that execute this ontology model to generate diagnostic recommendations were developed; they demonstrated the ability to use a wide variety of data and to access remote data sources through linked data technologies. The thesis thus demonstrates the development of an argumentation-based, ontology-driven diagnostic decision support model and system that can integrate information from a variety of sources to generate diagnostic recommendations, developed without specialized software or tools for modelling and execution and using a simple modelling method. The third part of this thesis details the evaluation of the Disease-Symptom Model across all stages of a clinical encounter by comparing its performance with that of clinicians. The evaluation showed that the Disease-Symptom Model can provide, in the early stages of the clinical encounter, a ranked diagnostic recommendation comparable to that of clinicians, and that diagnostic performance in the early stages can be improved by using linked data technologies to incorporate more information into the decision making. With limited information, the performance of the Disease-Symptom Model varies depending on the type of case; as more information is collected during the clinical encounter, the decision support application can provide recommendations comparable to those of the clinicians recruited for the study. The evaluation showed that even with the simple weighting and summation method used in the Disease-Symptom Model, the diagnostic ranking was comparable to that of dentists. With limited information in the early stages of the clinical encounter, the Disease-Symptom Model was able to provide an accurately ranked diagnostic recommendation, validating the model and methods used in this thesis.
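
The abstract specifies the ranking mechanism (weighted OWL/RDF arguments summed with SPARQL) without giving the ontology itself, so the sketch below invents a toy vocabulary and weights with rdflib to show the general shape of such a query. It models only supporting arguments, although the thesis's model also captures negating ones; every term and weight here is hypothetical.

```python
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

EX = Namespace("http://example.org/dsm#")  # hypothetical vocabulary
g = Graph()

# Hypothetical weighted arguments linking a finding to a diagnosis.
facts = [
    ("ToothacheOnCold", "ReversiblePulpitis", 0.6),
    ("ShortPainDuration", "ReversiblePulpitis", 0.4),
    ("ToothacheOnCold", "IrreversiblePulpitis", 0.3),
    ("SpontaneousPain", "IrreversiblePulpitis", 0.8),
]
observed = {"ToothacheOnCold", "ShortPainDuration"}
for symptom, disease, weight in facts:
    if symptom in observed:  # assert only arguments whose premise holds
        arg = EX[f"arg_{symptom}_{disease}"]
        g.add((arg, RDF.type, EX.Argument))
        g.add((arg, EX.supports, EX[disease]))
        g.add((arg, EX.weight, Literal(weight, datatype=XSD.double)))

# Sum argument weights per diagnosis and rank, in the spirit of the model.
query = """
PREFIX ex: <http://example.org/dsm#>
SELECT ?disease (SUM(?w) AS ?score) WHERE {
    ?arg a ex:Argument ; ex:supports ?disease ; ex:weight ?w .
} GROUP BY ?disease ORDER BY DESC(?score)
"""
for disease, score in g.query(query):
    print(disease, float(score))
```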
450

Removing and restoring control flow with the Value State Dependence Graph

Stanier, James January 2012
This thesis studies the practicality of compiling with only data flow information. Specifically, we focus on the challenges that arise when using the Value State Dependence Graph (VSDG) as an intermediate representation (IR). We perform a detailed survey of IRs in the literature in order to discover trends over time, and we classify them by their features in a taxonomy. We see how the VSDG fits into the IR landscape, and look at the divide between academia and the 'real world' in terms of compiler technology. Since most data flow IRs cannot be constructed for irreducible programs, we perform an empirical study of irreducibility in current versions of open source software, and then compare them with older versions of the same software. We also study machine-generated C code from a variety of different software tools. We show that irreducibility is no longer a problem, and is becoming less so with time. We then address the problem of constructing the VSDG. Since previous approaches in the literature have been poorly documented or ignored altogether, we give our approach to constructing the VSDG from a common IR: the Control Flow Graph. We show how our approach is independent of the source and target language, how it is able to handle unstructured control flow, and how it is able to transform irreducible programs on the fly. Once the VSDG is constructed, we implement Lawrence's proceduralisation algorithm in order to encode an evaluation strategy whilst translating the program into a parallel representation: the Program Dependence Graph. From here, we implement scheduling and then code generation using the LLVM compiler. We compare our compiler framework against several existing compilers, and show how removing control flow with the VSDG and then restoring it later can produce high quality code. We also examine specific situations where the VSDG can put pressure on existing code generators. Our results show that the VSDG represents a radically different, yet practical, approach to compilation.
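
The abstract's irreducibility study rests on a standard definition that can be made concrete: a control flow graph is reducible iff the classic T1/T2 transformations (delete a self-loop; merge a node with its unique predecessor) collapse it to a single node. The sketch below is an illustrative Python rendering of that textbook test, not code from the thesis.

```python
def is_reducible(cfg, entry):
    """T1/T2 reducibility test.

    cfg: dict mapping each node to the set of its successors.
    Returns True iff the graph collapses to the single entry node.
    """
    succ = {n: set(s) for n, s in cfg.items()}
    while True:
        # T1: delete self-loops.
        for n in succ:
            succ[n].discard(n)
        # Recompute predecessors.
        preds = {n: set() for n in succ}
        for n, ss in succ.items():
            for s in ss:
                preds[s].add(n)
        # T2: merge a non-entry node that has exactly one predecessor.
        target = next((n for n in succ
                       if n != entry and len(preds[n]) == 1), None)
        if target is None:
            return len(succ) == 1
        (p,) = preds[target]
        succ[p] |= succ.pop(target)  # merged node's successors move up
        succ[p].discard(target)      # drop the edge into the merged node

# Canonical irreducible CFG: a loop between b and c entered at both nodes.
irreducible = {"a": {"b", "c"}, "b": {"c"}, "c": {"b"}}
reducible = {"a": {"b"}, "b": {"c", "a"}, "c": set()}
print(is_reducible(irreducible, "a"))  # False
print(is_reducible(reducible, "a"))    # True
```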
