271

A framework for hierarchical time-oriented data visualisation

Henkin, Rafael January 2018 (has links)
The paradigm of exploratory data analysis advocates the use of multiple perspectives to formulate hypotheses about the data. This thesis presents a framework to support it through the use of interactive hierarchical visualisations for the exploration of temporal data. The research that leads to the framework involves investigating the conventional interactive techniques for temporal data, how they can be combined with hierarchical methods, and the conceptual transformations that enable navigating between multiple perspectives. The aim of the research is to facilitate the design of interactive visualisations based on the use of granularities, or units of time, which hide or reveal processes at various scales and are a key aspect of temporal data. Characteristics of granularities are suitable for hierarchical visualisations, as evidenced in the literature. However, current conceptual models and frameworks lack the means to incorporate characteristics of granularities as an integral part of visualisation design. The research addresses this by combining features of hierarchical and time-oriented visualisations and enabling systematic re-configuration of visualisations. Current techniques for visualising temporal data are analysed and specified at previously unsupported levels by breaking down visual encodings into decomposed layers, which can be arranged and recombined through hierarchical composition methods. Afterwards, the transformations of the properties of temporal data are defined by drawing on the interactions found in the literature and formalising them as a set of conceptual operators. The complete framework is introduced by combining the components that form it, enabling the specification of visual encodings, hierarchical compositions and temporal transformations. A case study then demonstrates how the framework can be used and its benefits for evaluating analysis strategies in visual exploration.
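As an illustration of the granularity idea at the heart of such a framework — grouping the same temporal data by different units of time to hide or reveal processes at different scales — here is a minimal Python/pandas sketch (not the thesis's framework; the data and column names are invented):

```python
import pandas as pd

# Hypothetical hourly event log: one timestamped measurement per row.
events = pd.DataFrame({
    "timestamp": pd.date_range("2018-01-01", periods=24 * 14, freq="h"),
    "value": range(24 * 14),
})
ts = events["timestamp"]

# A granularity maps each timestamp onto a coarser unit of time;
# switching granularity hides or reveals processes at different scales.
by_day = events.groupby(ts.dt.date)["value"].sum()
by_hour_of_day = events.groupby(ts.dt.hour)["value"].mean()

# Hierarchical composition: nest one granularity inside another
# (day-of-week -> hour-of-day) to expose weekly and daily cycles.
nested = events.groupby([ts.dt.dayofweek, ts.dt.hour])["value"].mean()
print(nested.head())
```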
272

Ontology driven clinical decision support for early diagnostic recommendations

Mannamparambil Chandrasekharan, Gopikrishnan January 2018 (has links)
Diagnostic error is a significant problem in medicine, a major cause of concern for patients and clinicians, and is associated with moderate to severe harm to patients. Diagnostic errors are a primary cause of clinical negligence and can result in malpractice claims. Cognitive errors caused by biases such as premature closure and confirmation bias have been identified as a major cause of diagnostic error. Researchers have identified several strategies to reduce diagnostic error arising from cognitive factors, including considering alternatives, reducing reliance on memory, and providing access to clear, well-organized information. Clinical Decision Support Systems (CDSSs) have been shown to reduce diagnostic errors. Clinical guidelines improve consistency of care and can potentially improve healthcare efficiency. They can alert clinicians to the diagnostic tests and procedures that have the greatest evidence and provide the greatest benefit. Clinical guidelines can be used to streamline clinical decision making and provide the knowledge base for guideline-based CDSSs and clinical alert systems; they can also potentially improve diagnostic decision making by improving information gathering. Argumentation is an emerging approach for dealing with unstructured evidence in domains, such as healthcare, that are characterized by uncertainty. The knowledge needed to support decision making is expressed in the form of arguments. Argumentation has certain advantages over other decision support reasoning methods: the ability to function with incomplete information, the ability to capture domain knowledge in an easy manner, the use of non-monotonic logic to support defeasible reasoning, and the provision of recommendations in a manner that can be easily explained to clinicians. Argumentation is therefore a suitable method for generating early diagnostic recommendations. Argumentation-based CDSSs have been developed in a wide variety of clinical domains; however, the impact of an argumentation-based diagnostic Clinical Decision Support System (CDSS) had not previously been evaluated. The first part of this thesis evaluates the impact of guideline recommendations and an argumentation-based diagnostic CDSS on clinicians' information gathering and diagnostic decision making, as well as the impact of guideline recommendations on management decision making. The study found that argumentation is a viable method for generating diagnostic recommendations that can potentially help reduce diagnostic error. It showed that guideline recommendations have a positive impact on the information gathering of optometrists and can potentially help optometrists ask the right questions and perform tests to current standards of care. Guideline recommendations were also found to have a positive impact on management decision making. A CDSS is dependent on the quality of the data entered into the system: faulty interpretation of data can lead the clinician to enter wrong data and cause the CDSS to provide wrong recommendations. Current-generation argumentation-based CDSSs and other diagnostic decision support systems have problems with semantic interoperability that prevent them from using data from the web. The clinician and CDSS are limited to the information collected during a clinical encounter and cannot access information on the web that could be relevant to a patient, owing to the distributed nature of medical information and the lack of semantic interoperability between healthcare systems.
Current argumentation-based decision support applications require specialized tools for modelling and execution, which prevents their widespread use and adoption, especially when those tools require additional training and licensing arrangements. Semantic web and linked data technologies have been developed to overcome problems with semantic interoperability on the web, and ontology-based diagnostic CDSS applications have been built on them to address the semantic interoperability of healthcare data in decision support applications. However, these models have problems with expressiveness, requiring specialized software and algorithms to generate diagnostic recommendations. The second part of this thesis describes the development of an argumentation-based, ontology-driven diagnostic model and a CDSS that can execute this model to generate ranked diagnostic recommendations. This novel model, called the Disease-Symptom Model, combines the strengths of argumentation with those of semantic web technology. The model allows the domain expert to express arguments favouring and negating a diagnosis in the OWL/RDF language, uses a simple weighting scheme representing the degree of support of each argument within the model, and uses SPARQL to sum the weights and produce a ranked diagnostic recommendation. It can provide justifications for each recommendation in a manner that clinicians can easily understand. CDSS prototypes that can execute this ontology model to generate diagnostic recommendations were developed; they demonstrated the ability to use a wide variety of data and to access remote data sources through linked data technologies. The thesis thus demonstrates the development of an argumentation-based, ontology-driven diagnostic decision support model and system that can integrate information from a variety of sources to generate diagnostic recommendations, built without specialized modelling and execution software and using a simple modelling method. The third part of this thesis details the evaluation of the Disease-Symptom Model across all stages of a clinical encounter by comparing the performance of the model with that of clinicians. The evaluation showed that the Disease-Symptom Model can provide, in the early stages of the clinical encounter, a ranked diagnostic recommendation comparable to clinicians'. Diagnostic performance in the early stages can be improved by using linked data technologies to incorporate more information into the decision making; with limited information, performance varies with the type of case. As more information is collected during the clinical encounter, the decision support application can provide recommendations comparable to those of the clinicians recruited for the study. Even with the simple weighting and summation method used in the Disease-Symptom Model, the diagnostic ranking was comparable to the dentists'; with limited information in the early stages of the clinical encounter, the model provided accurately ranked diagnostic recommendations, validating the model and methods used in this thesis.
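The weight-summing mechanism described above can be sketched with rdflib (an illustrative toy under an invented vocabulary; the thesis's actual Disease-Symptom ontology, diagnoses, and weights are not reproduced here):

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

EX = Namespace("http://example.org/dsm#")  # hypothetical vocabulary
g = Graph()

# Invented arguments: each favours (positive weight) or negates
# (negative weight) a diagnosis, the weight expressing its degree
# of support within the model.
arguments = [
    (EX.arg1, EX.Pulpitis, 0.6),
    (EX.arg2, EX.Pulpitis, 0.3),
    (EX.arg3, EX.Abscess, 0.4),
    (EX.arg4, EX.Abscess, -0.5),
]
for arg, diagnosis, weight in arguments:
    g.add((arg, RDF.type, EX.Argument))
    g.add((arg, EX.supports, diagnosis))
    g.add((arg, EX.weight, Literal(weight, datatype=XSD.decimal)))

# SPARQL sums the weights of the applicable arguments per diagnosis,
# yielding a ranked diagnostic recommendation.
query = """
SELECT ?diagnosis (SUM(?w) AS ?score) WHERE {
    ?arg a ex:Argument ;
         ex:supports ?diagnosis ;
         ex:weight ?w .
} GROUP BY ?diagnosis ORDER BY DESC(?score)
"""
for row in g.query(query, initNs={"ex": EX}):
    print(row.diagnosis, float(row.score))
```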
273

A linguistic approach to concurrent, distributed, and adaptive programming across heterogeneous platforms

Harvey, Paul January 2015 (has links)
Two major trends in computing hardware during the last decade have been an increase in the number of processing cores found in individual hardware platforms and the growing ubiquity of distributed, heterogeneous systems. Together, these changes can improve not only the performance of a range of applications, but also the types of applications that can be created. Despite the advances in hardware technology, advances in the programming of such systems have not kept pace. Traditional concurrent programming has always been challenging, and is only set to become more so as the level of hardware concurrency increases. The different hardware platforms which make up heterogeneous systems come with domain-specific programming models that are not designed to interact, nor to take into account the different resource constraints present across different hardware devices, motivating a need for runtime reconfiguration or adaptation. This dissertation investigates the actor model of computation as an appropriate abstraction to address the issues present in programming concurrent, distributed, and adaptive applications across different scales and types of computing hardware. Given the limitations of other approaches, this dissertation describes a new actor-based programming language (Ensemble) and its runtime to address these challenges. The goal of the language is to enable non-specialist programmers to take advantage of parallel, distributed, and adaptive programming without requiring in-depth knowledge of hardware architectures or software frameworks. The dissertation also describes the design and implementation of the runtime system which executes Ensemble applications across a range of heterogeneous platforms. To show the suitability of the actor-based abstraction for creating applications for such systems, the language and runtime were evaluated in terms of linguistic complexity and performance. These evaluations covered programming embedded, concurrent, distributed, and adaptable applications, as well as combinations thereof. The results show that the actor model provides an objectively simple way to program such systems without sacrificing performance.
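For readers unfamiliar with the abstraction, a minimal sketch of the actor model in Python (illustrating the general concept only; Ensemble's syntax and runtime differ):

```python
import queue
import threading

class Actor:
    """An actor owns private state and a mailbox, and interacts with
    the world only by processing one message at a time -- so no locks
    are needed around its state."""

    def __init__(self):
        self.mailbox = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, msg):
        self.mailbox.put(msg)  # asynchronous: never blocks the sender

    def _run(self):
        while True:
            self.receive(self.mailbox.get())

    def receive(self, msg):
        raise NotImplementedError

class Counter(Actor):
    def __init__(self):
        self.count = 0  # private state, never shared between threads
        super().__init__()

    def receive(self, msg):
        if msg == "inc":
            self.count += 1

c = Counter()
for _ in range(3):
    c.send("inc")
```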
274

GUMSMP : a scalable parallel Haskell implementation

Aljabri, Malak Saleh January 2015 (has links)
The most widely available high-performance platforms today are hierarchical, with shared-memory leaves, e.g. clusters of multi-cores, or NUMA machines with multiple regions. The Glasgow Haskell Compiler (GHC) provides a number of parallel Haskell implementations targeting different parallel architectures. In particular, GHC-SMP supports shared-memory architectures, and GHC-GUM supports distributed-memory machines. Both implementations use different, but related, runtime system (RTS) mechanisms and achieve good performance, but a specialised RTS for the ubiquitous hierarchical architectures is lacking. This thesis presents the design, implementation, and evaluation of a new parallel Haskell RTS, GUMSMP, that combines shared- and distributed-memory mechanisms to exploit hierarchical architectures more effectively. The design evaluates a variety of choices and aims to efficiently combine scalable distributed-memory parallelism, using a virtual shared heap over a hierarchical architecture, with low-overhead shared-memory parallelism on shared-memory nodes. Key design objectives in realising this system are to prefer local work and to exploit mostly passive load distribution with pre-fetching. Systematic performance evaluation shows that the automatic hierarchical load distribution policies must be carefully tuned to obtain good performance. We investigate the impact of several policies, including work pre-fetching, favouring inter-node work distribution, and spark segregation with different export and select policies. We present performance results for GUMSMP, demonstrating good scalability for a set of benchmarks on up to 300 cores; moreover, our policies provide performance improvements of up to a factor of 1.5 compared to GHC-GUM. The thesis provides a performance evaluation of distributed- and shared-heap implementations of parallel Haskell on a state-of-the-art physical shared-memory NUMA machine. The evaluation exposes bottlenecks in memory management, which limit scalability beyond 25 cores. We demonstrate that GUMSMP, which combines both distributed and shared heap abstractions, consistently outperforms the shared-memory GHC-SMP on seven benchmarks, by a factor of 3.3 on average. Specifically, we show that the best results are obtained when sharing memory only within a single NUMA region and using distributed-memory system abstractions across the regions.
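The "prefer local work, with passive pre-fetching" policy can be caricatured in a few lines of Python (a toy sketch with invented names, not GUMSMP's RTS):

```python
from collections import deque

local_sparks = deque(["spark-a", "spark-b"])  # work created on this node
prefetch_buffer = deque()  # work fetched ahead from other nodes

def request_remote_work():
    """Placeholder for an asynchronous inter-node work request (a
    'fish'-style message in GUM-like systems); returns None here."""
    return None

def next_task():
    # Policy 1: always prefer locally generated work (better locality).
    if local_sparks:
        return local_sparks.popleft()
    # Policy 2: fall back to pre-fetched remote work so the node does
    # not idle while a fresh request is in flight; top the buffer up.
    if prefetch_buffer:
        task = prefetch_buffer.popleft()
        replacement = request_remote_work()
        if replacement is not None:
            prefetch_buffer.append(replacement)
        return task
    # Policy 3: only now ask other nodes for work (passive distribution).
    return request_remote_work()

print(next_task())  # -> 'spark-a': local work is consumed first
```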
275

Pressure as a non-dominant hand input modality for bimanual interaction techniques on touchscreen tablets

McLachlan, Ross David January 2015 (has links)
Touchscreen tablet devices present an interesting challenge to interaction designers: they are not quite handheld like their smartphone cousins, and though their form factor affords usage away from the desktop and other surfaces, it requires the user to support a larger weight and navigate more screen space. Thus, the repertoire of touch input techniques is often reduced to those performable with one hand. Previous studies have suggested that there are bimanual interaction techniques offering both manual and cognitive benefits over equivalent unimanual techniques, and that pressure is useful as a primary input modality on mobile devices and as an augmentation to finger/stylus input on touchscreens. However, there has been no research on the use of pressure as a modality to expand the range of bimanual input techniques on tablet devices. The first two experiments investigated bimanual scrolling on tablet devices, based on the premise that the control of scrolling speed and vertical scrolling direction can be thought of as separate tasks, and that the current status quo of combining both into a single one-handed (unimanual) gesture on a touchscreen or on a physical dial can be improved upon. Four bimanual scrolling techniques were compared to two status quo unimanual scrolling techniques in a controlled linear targeting task. The Dial and Slider bimanual technique was superior to the others in terms of Movement Time, and the Dial and Pressure bimanual technique was superior in terms of Subjective Workload, suggesting that the bimanual scrolling techniques are better than the status quo unimanual techniques in terms of both performance and preference. The same interaction techniques were then evaluated using a photo browsing task, chosen to resemble the way people browse their music collections when they are unsure what they are looking for. These studies demonstrated that pressure is a more effective auxiliary modality than a touch slider in the context of bimanual scrolling techniques. They also demonstrated that the bimanual techniques did not provide any concrete benefits over the unimanual touch scrolling technique, the status quo on commercially available touchscreen tablets and smartphones, in the context of an image browsing task. A novel investigation of pressure input is then presented in which pressure is characterised as a transient modality: one that has a natural inverse, bounce-back, and a state that persists only during interaction. Two studies were carried out investigating the precision of applied pressure as part of a bimanual interaction, where the selection event is triggered by the dominant hand on the touchscreen (using existing touchscreen input gestures), with the goal of studying pressure as a functional primitive, without implying any particular application. Two aspects of pressure input were studied: pressure targeting, and maintaining pressure over time. The results demonstrated that, using a combination of non-dominant-hand pressure and dominant-hand touchscreen taps, overall pressure targeting accuracy was high (93.07%). For more complicated dominant-hand input techniques (swipe, pinch and rotate gestures), pressure targeting accuracy was still high (86%).
The results also demonstrated that participants were able to achieve high levels of pressure accuracy (90.3%) using dominant-hand swipe gestures (the simplest gesture in the study), suggesting that the ability to perform a simultaneous combination of pressure and touchscreen gesture input depends on the complexity of the dominant-hand action involved. This thesis provides the first detailed study of the use of non-dominant-hand pressure input to enable bimanual interaction techniques for tablet devices. It explores the use of pressure as a modality that can expand the range of available bimanual input techniques while the user is seated and comfortably holding the device, and offers designers guidelines for including pressure as a non-dominant-hand input modality for bimanual interaction techniques, in a way that supplements existing dominant-hand action.
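The pressure-targeting primitive studied here can be illustrated as follows (a sketch with invented parameters; the study's apparatus and level counts are not reproduced): continuous non-dominant-hand pressure is quantized into discrete targets, and a dominant-hand tap commits the selection.

```python
N_LEVELS = 6        # invented number of discrete pressure targets
MAX_PRESSURE = 1.0  # normalized sensor range

def pressure_to_target(p):
    """Quantize a normalized pressure reading into one of N_LEVELS
    targets. Pressure is transient: releasing the grip bounces back
    to level 0, so the state persists only during interaction."""
    p = max(0.0, min(MAX_PRESSURE, p))
    return min(int(p / MAX_PRESSURE * N_LEVELS), N_LEVELS - 1)

def on_dominant_hand_tap(current_pressure, intended_target):
    """The dominant-hand tap commits the selection; a trial counts as
    accurate if the quantized pressure matches the intended target."""
    return pressure_to_target(current_pressure) == intended_target

# Holding about 0.58 of full pressure while tapping selects target 3.
assert on_dominant_hand_tap(0.58, 3)
```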
276

Efficient hand orientation and pose estimation for uncalibrated cameras

Asad, M. January 2017 (has links)
We propose a staged probabilistic regression method that is capable of learning well from a number of variations within a dataset. The proposed method is based on a multi-layered Random Forest, where the first layer consists of a single marginalization-weights regressor and the second layer contains an ensemble of expert learners. The expert learners are trained in stages, where each stage involves training and adding an expert learner to the intermediate model. After every stage, the intermediate model is evaluated to reveal a latent variable space defining a subset that the model has difficulty learning from; this subset is used to train the next expert regressor. The posterior probabilities for each training sample are extracted from each expert regressor and used, along with a Kullback-Leibler divergence-based optimization method, to estimate the marginalization weights for each regressor. A marginalization-weights regressor is then trained using CDF and the estimated marginalization weights. We show the extension of our work to simultaneous hand orientation and pose inference. The proposed method outperforms the state of the art for marginalization of multi-layered Random Forests and for hand orientation inference. Furthermore, we show that a method which simultaneously learns from hand orientation and pose outperforms pose classification alone, as it better captures the variations in pose induced by viewpoint changes.
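A schematic of the staged training loop described above (an illustrative Python sketch with invented thresholds, and scikit-learn regressors standing in for the thesis's experts; the KL-divergence weight optimization and the CDF step are omitted):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def train_staged_ensemble(X, y, n_stages=3):
    experts = []
    X_hard, y_hard = X, y  # the first expert sees the full dataset
    for _ in range(n_stages):
        expert = RandomForestRegressor(n_estimators=50).fit(X_hard, y_hard)
        experts.append(expert)
        # Evaluate the intermediate model to expose the subset it has
        # difficulty learning from; that subset trains the next expert.
        pred = np.mean([e.predict(X) for e in experts], axis=0)
        errors = np.abs(pred - y)
        hard = errors > np.percentile(errors, 70)  # invented threshold
        if hard.sum() < 10:
            break
        X_hard, y_hard = X[hard], y[hard]
    return experts

X = np.random.rand(500, 5)
y = X.sum(axis=1)
experts = train_staged_ensemble(X, y)
print(len(experts), "experts trained")
```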
277

Energy efficient and secure wireless communications for wireless sensor networks

Gong, P. January 2017 (has links)
This dissertation considers wireless sensor networks (WSNs) operating in severe environments where energy efficiency and security are important factors. The main aim of this research is to improve routing protocols in WSNs to ensure efficient energy usage and to protect against attacks targeting WSNs, especially energy-draining attacks. An enhancement of the existing AODV (Ad hoc On-Demand Distance Vector) routing protocol for energy efficiency, called AODV-Energy Harvesting Aware (AODVEHA), is proposed and evaluated. It not only inherits the advantages of AODV, which is well suited to ad hoc networks, but also makes use of the energy harvesting capability of sensor nodes in the network. In addition to the investigation of energy efficiency, another routing protocol, the Secure and Energy Aware Routing Protocol (ETARP), designed for both energy efficiency and security in WSNs, is presented. The key part of ETARP is route selection based on utility theory, a novel approach that simultaneously factors the energy efficiency and the trustworthiness of routes into the routing protocol. Finally, this dissertation proposes a routing protocol to protect against a specific type of resource depletion attack known as the Vampire attack. The proposed resource-conserving protection against energy draining (RCPED) protocol is independent of cryptographic methods, which brings the advantages of lower energy cost and hardware requirements. RCPED collaborates with existing routing protocols, detects abnormal signs of Vampire attacks and determines the possible attackers. Routes are then discovered and selected on the basis of maximum priority, where the priority, reflecting the energy efficiency and safety level of a route, is calculated by means of the Analytic Hierarchy Process (AHP). The proposed analytic models for these routing solutions are verified by simulations, whose results validate the improvements of the proposed routing approaches in terms of better energy efficiency and guarantees of security.
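The AHP step — reducing pairwise comparisons of criteria to a priority vector via the principal eigenvector — can be sketched as follows (illustrative criteria and numbers, not RCPED's actual comparison matrix):

```python
import numpy as np

# Invented pairwise comparisons (Saaty's 1-9 scale, reciprocal matrix):
# how much more important is criterion i than criterion j?
# Criteria: [residual energy, hop count, safety level]
A = np.array([
    [1.0, 3.0, 1 / 2],
    [1 / 3, 1.0, 1 / 5],
    [2.0, 5.0, 1.0],
])

# The AHP priority vector is the normalized principal eigenvector of A.
eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w = w / w.sum()

# Score each candidate route against the criteria (invented values in
# [0, 1]) and select the route with the maximum weighted priority.
routes = {"route-1": [0.8, 0.4, 0.9], "route-2": [0.6, 0.9, 0.5]}
best = max(routes, key=lambda r: float(np.dot(w, routes[r])))
print("priorities:", np.round(w, 3), "-> selected:", best)
```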
278

Using directional change for information extraction in financial market data

Tao, Ran January 2018 (has links)
Directional change (DC) is a new concept for summarizing market dynamics. Instead of sampling the financial market at fixed intervals, as in traditional time series analysis, DC is data-driven: the price change itself dictates when a price is recorded. DC thus provides a complementary way to extract information from data; sampling at irregular, data-determined intervals allows us to observe features that may not be recognized under a time series view. In this thesis we propose a new method for summarizing financial markets through the DC framework. First, we define the vocabulary needed for a DC market summary, which comprises DC indicators and DC metrics: indicators are used to build a DC market summary for a single market, while metrics quantitatively measure the differences between two markets under the directional change method. We demonstrate how such metrics can quantitatively measure the differences between different DC market summaries. Then, with real financial market data studied using DC, we demonstrate the practicability of DC market analysis, as a method complementary to time series analysis, for the analysis of financial markets.
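The DC sampling rule itself is compact: a new event is recorded only when the price reverses by at least a fixed threshold from the last extreme. A minimal sketch (the thesis's indicators and metrics are built on top of events like these):

```python
def directional_change_events(prices, theta=0.005):
    """Return (index, 'up'/'down') events: a downturn (upturn) is
    confirmed when price falls (rises) by theta relative to the last
    extreme -- the data, not the clock, dictates sampling."""
    events, mode = [], "up"  # assume an initial uptrend
    extreme = prices[0]
    for i, p in enumerate(prices[1:], start=1):
        if mode == "up":
            if p > extreme:
                extreme = p                      # new high
            elif p <= extreme * (1 - theta):
                events.append((i, "down"))       # downturn confirmed
                mode, extreme = "down", p
        else:
            if p < extreme:
                extreme = p                      # new low
            elif p >= extreme * (1 + theta):
                events.append((i, "up"))         # upturn confirmed
                mode, extreme = "up", p
    return events

# Two events: a downturn at index 2, then an upturn at index 4.
print(directional_change_events([100, 101, 100.2, 100.4, 101.5]))
```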
279

QoS-aware joint power and subchannel allocation algorithms for wireless network virtualization

Wei, Junyi January 2017 (has links)
Wireless network virtualization (WNV) is a promising technology which aims to overcome the network redundancy problems of the current Internet. WNV involves the abstraction and sharing of resources among different parties and has been considered a long-term solution for the future Internet due to its flexibility and feasibility. WNV separates the traditional Internet service provider's role into the infrastructure provider (InP) and the service provider (SP): the InP owns all physical resources, while SPs borrow those resources to create their own virtual networks in order to provide services to end users. Because radio resources are finite, it is sensible to introduce WNV to improve resource efficiency. This thesis proposes three resource allocation algorithms for an orthogonal frequency division multiple access (OFDMA)-based WNV transmission system, aiming to improve resource utility. The first algorithm seeks to maximize the total throughput of the InP and the virtual network operators (VNOs) by means of subchannel allocation. The second is a power allocation algorithm which aims to improve the VNOs' energy efficiency while also balancing the competition across VNOs. Finally, a joint power and subchannel allocation algorithm is proposed, which aims to maximize the overall transmission rate. All of the above algorithms take into account the InP's quality of service (QoS) requirement in terms of data rate. The evaluation results indicate that the joint resource allocation algorithm outperforms the others; the results can also serve as a guideline for WNV performance guarantees.
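The shape of the subchannel allocation problem can be illustrated with a greedy baseline (a sketch with invented rates and QoS values, not the thesis's algorithm): first reserve enough subchannels to meet the InP's data-rate requirement, then give each remaining subchannel to whichever party achieves the highest rate on it.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subchannels, users = 16, ["InP", "VNO1", "VNO2"]
# Hypothetical achievable rate of each user on each subchannel (Mbps).
rate = {u: rng.uniform(1, 10, n_subchannels) for u in users}
INP_QOS_RATE = 20.0  # invented InP data-rate requirement (Mbps)

alloc, inp_rate = {}, 0.0
# First satisfy the InP's QoS: greedily take its best subchannels.
for ch in np.argsort(rate["InP"])[::-1]:
    if inp_rate >= INP_QOS_RATE:
        break
    alloc[int(ch)] = "InP"
    inp_rate += rate["InP"][ch]
# Remaining subchannels go to whichever party's rate is highest,
# maximizing the total throughput.
for ch in range(n_subchannels):
    if ch not in alloc:
        alloc[ch] = max(users, key=lambda u: rate[u][ch])
print(alloc)
```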
280

Automation bias : exploring causal mechanisms and potential mitigation strategies

Gadala, M. January 2017 (has links)
Automated decision support tools are designed to aid users and improve their performance in certain tasks by providing advice in the form of prompts, alarms, assessments, or recommendations. However, recent evidence suggests that use of such tools sometimes introduces decision errors that are not made without the tool. We refer to this phenomenon as "automation bias" (AB), a broader definition of the term than used by many authors. Sometimes, such automation-induced errors can even result in overall performance (in terms of correct decisions) that is actually worse with the tool than without it. Our literature review reveals an emphasis on mediators affecting automation bias and some mitigation strategies aimed at reducing it, but a lack of research on the cognitive, causal explanations for automation bias and on adaptive mitigation strategies that result in tools that adapt to the needs and characteristics of individual users. This thesis aims to address some of these gaps and focuses on systems consisting of a human and an automated tool which does not replace, but instead supports, the human in making a decision, with overall responsibility lying with the human user. The overall goal of the thesis is to help reduce the rate of automation bias through a better understanding of its causes and the proposal of innovative, adaptive mitigation strategies. To achieve this, we begin with an extensive literature review on automation bias, covering examples, mediators, explanations, and mitigations, while identifying areas for further research. This review is followed by three experiments aimed at reducing the rate of AB in different ways: (1) an experiment exploring causal mechanisms of automation bias, the effect of the mere presence of tool advice before its presentation, and the effect of the sequence of tool advice, in a glaucoma risk calculator environment; (2) simulations applying concepts of diversity to human + human systems to improve system performance in a breast cancer double-reading programme; and (3) an experiment studying the possibility of improving system performance by tailoring the tool setting (sensitivity/specificity combination) to groups of similarly skilled users and to cases of similar difficulty level, using a spellchecking tool. Results from the glaucoma experiment provide evidence of the effect of the presence of tool advice on user decisions, even before its presentation, as well as evidence of a newly introduced cognitive mechanism (users' strategic change in decision threshold) which may account for some automation bias errors previously observed but unexplained in the literature. Results from the double-reading experiment provide evidence of the benefits of diversity in improving system performance. Finally, results from the spellchecker experiment provide evidence that groups of similarly skilled users perform best at different tool settings, that the same group of users performs better with a different tool setting in difficult versus easy tasks, and that simple models of user behaviour may allow prediction, among a subset of settings for a given tool, of the setting most appropriate for each user-ability group and class of case difficulty.
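The third experiment's premise — that different user groups perform best at different tool operating points — can be illustrated with a toy analytical model (all numbers invented, not the thesis's data or model):

```python
# Hypothetical tool operating points (sensitivity, specificity).
SETTINGS = [(0.95, 0.70), (0.85, 0.85), (0.70, 0.95)]
PREVALENCE = 0.1  # invented base rate of true positives

def expected_error(skill, sens, spec):
    """Toy model of the human + tool system: residual error after the
    user reviews tool output. Users catch the tool's false alarms with
    probability `skill`, but its misses only half as often (assumed:
    misses are harder to notice)."""
    miss_rate = PREVALENCE * (1 - sens)
    false_alarm_rate = (1 - PREVALENCE) * (1 - spec)
    return miss_rate * (1 - skill / 2) + false_alarm_rate * (1 - skill)

# The best setting shifts between skill groups: highly skilled users,
# who reliably catch false alarms, are better served by a more
# sensitive (alarm-prone) setting than low-skill users are.
for skill in (0.5, 0.8, 0.95):  # low / medium / high skill groups
    best = min(SETTINGS, key=lambda s: expected_error(skill, *s))
    print(f"skill={skill}: best (sens, spec) = {best}")
```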
