101

Understanding the topics and opinions from social media content

Zhou, Yiwei January 2017 (has links)
Social media has become an indispensable part of people’s daily lives, as it records and reflects people’s opinions and events of interest, as well as influencing people’s perceptions. As the most commonly employed and easily accessed data format on social media, a great deal of social media textual content is not only factual and objective, but also rich in opinionated information. Thus, besides the topics Internet users are talking about in social media textual content, it is also of great importance to understand the opinions they are expressing. In this thesis, I present broadly applicable text mining approaches for understanding the topics and opinions in user-generated texts on social media, in order to provide insights into the thoughts of Internet users on entities, events, etc. Specifically, I develop approaches to understand the semantic differences between language-specific editions of Wikipedia when discussing certain entities, from the perspective of related topical aspects and of aggregated sentiment bias. Moreover, I employ effective features to detect the reputation-influential sentences for person and company entities in Wikipedia articles, which lead to the detected sentiment bias. Furthermore, I propose neural network models with different levels of attention mechanism to detect the stances of tweets towards any given target. I also introduce an online timeline generation approach to detect and summarise the relevant sub-topics in the tweet stream, in order to provide Internet users with insights into the evolution of major events they are interested in.
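As an illustration only (not code from the thesis), the sketch below shows one minimal way an attention mechanism can condition a stance prediction on a given target: each tweet token is scored for relevance to the target, the token embeddings are combined according to those scores, and the result is classified into favour/against/none. All parameters, dimensions and labels are toy assumptions.

```python
# Minimal sketch of target-conditioned attention for stance detection.
# Everything here (weights, sizes, labels) is illustrative, not thesis code.
import numpy as np

rng = np.random.default_rng(0)
EMB, CLASSES = 16, 3                      # toy embedding size; favour/against/none

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attend(tweet_vecs, target_vec, W):
    """Score each token against the target and return the attended summary."""
    scores = tweet_vecs @ W @ target_vec          # one relevance score per token
    weights = softmax(scores)                     # attention distribution over tokens
    return weights @ tweet_vecs, weights          # weighted sum of token vectors

# Toy parameters standing in for learned weights.
W_att = rng.normal(scale=0.1, size=(EMB, EMB))
W_out = rng.normal(scale=0.1, size=(EMB, CLASSES))

tweet = rng.normal(size=(12, EMB))                # 12 token embeddings
target = rng.normal(size=EMB)                     # embedding of the stance target

summary, attn = attend(tweet, target, W_att)
stance_probs = softmax(summary @ W_out)
print("attention over tokens:", np.round(attn, 3))
print("stance distribution (favour/against/none):", np.round(stance_probs, 3))
```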
102

Introducing emotion-based personalisation to cancer websites : the impact of emotions on website personalisation and reuse intentions

Hadzidedic, Suncica January 2017 (has links)
Affective computing has received substantial attention in recent years. However, its application to personalised online cancer services is understudied. Therefore, this research primarily explores the role of emotions in predicting the preference for personalisation features, and in forming behavioural intentions in cancer website usage. Secondly, this research seeks to understand whether users of cancer websites prefer to be offered emotion-based personalisation over other options – personalised or non-personalised. Emotion-based personalisation was implemented, in several phases, on the cancer website developed for the purpose of this research. A number of controlled experiments were carried out, in which users interacted with the cancer website and evaluated its personalisation features. The findings confirm that users are more likely to reuse a cancer website when they are satisfied with its personalisation services and find the website usable. Moreover, both negative emotions (e.g., sadness and fear) and positive ones (e.g., interest) encourage reuse intentions. Post-use negative emotions are primarily influenced by the website’s usability, while satisfaction with personalisation and the usefulness of adaptive and adaptable services intensify positive emotions. The website is perceived as usable, and it induces user satisfaction, when its personalisation is considered useful. The findings imply that discrete emotions (of the nine basic emotions studied here) stimulate or discourage interaction with certain website features and content. Moreover, emotions experienced at the start of website use affect the perception of the usefulness of individual features available on the website. Generally, users experiencing positive emotions are eager to explore the website and be involved in the tailoring process. The effect of negative emotions is more difficult to generalise; it depends on the specific emotion and the personalisation feature in question. Overall, negative emotions are more likely to inhibit the use or perception of website features that require providing personal information and interests, or that entail extensive engagement from the user’s side. With regard to the second aim, this research suggests that emotion-based personalisation on a cancer website is preferred, although not significantly so over generic personalisation or no personalisation at all. Nevertheless, the findings urged further research. The survey and interview results consistently showed that personalisation was perceived as useful, that users were satisfied with it, that the website with emotion-based personalisation had the highest usability, and that most users preferred that type of personalisation. Moreover, repeat visitors and long-time cancer website users, who have been directly affected by cancer, decisively desired emotion-based personalisation. Overall, this research provides multiple theoretical and practical implications for personalisation adoption on cancer websites and for stimulating reuse intentions. It recommends rules for adaptation and personalisation algorithms that incorporate user emotions. Moreover, it extends existing theory and proposes a framework for understanding the emotion- and personalisation-related factors that influence intentions to revisit and reuse a personalised cancer website.
103

Convention emergence and destabilisation in multi-agent systems

Marchant, James M. January 2017 (has links)
Ensuring coordination amongst individual agents in multi-agent systems (MAS) helps to reduce clashes between them that waste resources and time, and facilitates the capability of the agent population to solve mutually beneficial problems. Determining this coordinated behaviour is not always possible a priori, due to technical issues such as lack of access to individual agents or computational issues arising from the large number of possible clashing actions. Additionally, in systems lacking centralised authorities, dictating rules from a top-down perspective is difficult or impossible. Conventions represent a lightweight, decentralised and emergent solution to this problem. Acting as socially accepted rules on expected behaviour, they help to focus and constrain agent interactions to facilitate coordination. Understanding how these conventions emerge and how they might be encouraged allows scalable coordination of behaviour within MAS with little computational or logistical overhead. In this thesis we consider how fixed-strategy Intervention Agents (IAs) may be used to encourage and direct convention emergence in MAS. We explore their efficacy in doing so in various topologies, covering both static and time-varying dynamic networks, and propose a number of methods and techniques to increase this efficacy further. We consider how these IAs might be used to destabilise an existing convention, replacing it with a more desirable one, and highlight the different methods required to do this. We also explore how various limitations, such as time or the observability of topological structure, can impact the emergence of conventions, and provide mechanisms to counteract these issues.
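As a hedged illustration of convention emergence (not the thesis model), the sketch below places agents on a random network where each agent repeatedly adopts the convention most common among its neighbours, while a handful of fixed-strategy Intervention Agents always play a target convention; the network construction, update rule and parameters are illustrative assumptions.

```python
# Toy convention-emergence simulation with fixed-strategy Intervention Agents.
import random

random.seed(1)
N, K, N_IA, TARGET = 200, 6, 10, 1        # agents, avg degree, IAs, promoted convention

# Build a simple random graph as adjacency sets until the average degree is ~K.
neigh = {i: set() for i in range(N)}
while sum(len(v) for v in neigh.values()) < N * K:
    a, b = random.sample(range(N), 2)
    neigh[a].add(b)
    neigh[b].add(a)

state = {i: random.choice([0, 1]) for i in range(N)}   # initial conventions
ias = set(random.sample(range(N), N_IA))               # fixed-strategy agents
for i in ias:
    state[i] = TARGET

for _ in range(20000):                                  # asynchronous updates
    i = random.randrange(N)
    if i in ias or not neigh[i]:
        continue                                        # IAs never deviate
    counts = [0, 0]
    for j in neigh[i]:
        counts[state[j]] += 1
    state[i] = 0 if counts[0] > counts[1] else 1        # adopt the majority convention

adoption = sum(1 for i in range(N) if state[i] == TARGET) / N
print(f"fraction following the IA-promoted convention: {adoption:.2f}")
```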
104

Visual attention for high-fidelity imaging

Bradley, Timothy January 2017 (has links)
Models of visual attention have many applications including, but not limited to, rendering, advertising, graphic design and road safety. The rise of high-fidelity imaging technologies, such as high dynamic range content and physically-based rendering, has created a need for more targeted models; however, the data necessary for their creation is sparse. This thesis aims to expand the applicability of visual attention frameworks for high-fidelity imaging, both by introducing a new selective rendering method for adaptively adjusting the quality of rendered scenes and by developing the necessary tools to validate existing and future models in high-fidelity domains. This thesis first presents a method for exploiting visual attention in a Physically-Based Rendering (PBR) pipeline by adjusting the complexity of Bidirectional Reflectance Distribution Functions (BRDFs) in unimportant image regions. Thus, the presented method substitutes high-accuracy, high-cost models with low-accuracy, low-cost models in less salient regions. The efficacy of this method is evaluated through a subjective rating experiment. The results of the psychophysical experiment found some significant confusion between the hybrid and reference images, which suggests that this can be employed as a tool to reduce computational costs. Furthermore, this thesis presents an experiment to assess the effect of high luminance levels on the viewing strategies of observers. This is accomplished through the creation of an HDR eye-tracking dataset consisting of eighty HDR images, shown at four distinct brightness levels. A statistical analysis of the resulting fixation density maps found that the reliability of LDR eye-tracking data decreases as the peak brightness of an image increases. This suggests the need for reliable HDR eye-tracking datasets. Finally, this thesis presents an eye-tracking experiment and subjective survey to analyse the interaction of ambient light levels and screen brightness on visual fatigue and visual saliency. Results of the experiment show an increase in similarity between HDR and LDR fixations as environmental illumination increases; this is of particular note as standard practice calls for eye-tracking datasets to be captured in dark environments.
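The following is a minimal sketch of the selective-shading idea, not the thesis renderer: pixels below a saliency threshold are shaded with a cheap diffuse-only model, while salient pixels use a costlier diffuse-plus-specular model. The saliency map, threshold and shading terms are all illustrative.

```python
# Toy saliency-driven BRDF selection: cheap shading in unimportant regions.
import numpy as np

def cheap_brdf(n_dot_l):
    return n_dot_l                                   # Lambertian diffuse only

def costly_brdf(n_dot_l, n_dot_h, shininess=64.0):
    return n_dot_l + n_dot_h ** shininess            # diffuse plus a specular lobe

H, W = 64, 64
rng = np.random.default_rng(0)
saliency = rng.random((H, W))                        # stand-in saliency map
n_dot_l = rng.random((H, W))                         # toy geometry terms
n_dot_h = rng.random((H, W))

THRESH = 0.7                                         # only the top ~30% treated as salient
salient = saliency > THRESH
image = np.where(salient,
                 costly_brdf(n_dot_l, n_dot_h),      # expensive model where it matters
                 cheap_brdf(n_dot_l))                # cheap model elsewhere
print(f"{salient.mean():.0%} of pixels shaded with the expensive BRDF")
```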
105

Energy-aware performance engineering in high performance computing

Roberts, Stephen I. January 2017 (has links)
Advances in processor design have delivered performance improvements for decades. As physical limits are reached, however, refinements to the same basic technologies are beginning to yield diminishing returns. Unsustainable increases in energy consumption are forcing hardware manufacturers to prioritise energy efficiency in their designs. Research suggests that software modifications will be needed to exploit the resulting improvements in current and future hardware, and new tools are required to capitalise on this new class of optimisation. This thesis investigates the field of energy-aware performance engineering. It begins by examining the current state of the art, which is characterised by ad hoc techniques and a lack of standardised metrics. Work in this thesis addresses these deficiencies and lays stable foundations for others to build on. The first contribution is a set of criteria defining the properties that energy-aware optimisation metrics should exhibit. These criteria show that current metrics cannot meaningfully assess the utility of code or correctly guide its optimisation. New metrics are proposed to address these issues, and theoretical and empirical proofs of their advantages are given. This thesis then presents the Power Optimised Software Envelope (POSE) model, which allows developers to assess whether power optimisation is worth pursuing for their applications. POSE is used to study the optimisation characteristics of codes from the Mantevo mini-application suite running on a Haswell-based cluster. The results show that, of these codes, TeaLeaf has the most scope for power optimisation while PathFinder has the least. Finally, the POSE modelling techniques are extended to evaluate the system-wide scope for energy-aware performance optimisation. The resulting System Summary POSE allows developers to assess the scope a system has for energy-aware software optimisation, independent of the code being run.
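The abstract does not give the POSE formulation, so the sketch below only illustrates the kind of trade-off reasoning that energy-aware metrics enable: the same hypothetical power optimisation looks worthwhile under a plain energy metric but not under the energy-delay product. All numbers are invented.

```python
# Toy comparison of two energy-aware metrics on an assumed optimisation.
def energy(power_w, runtime_s):
    return power_w * runtime_s                         # joules

def edp(power_w, runtime_s):
    return energy(power_w, runtime_s) * runtime_s      # joule-seconds (energy-delay product)

baseline = {"power_w": 180.0, "runtime_s": 100.0}      # illustrative measurements
optimised = {"power_w": 140.0, "runtime_s": 115.0}     # lower power, but slower

for name, metric in [("energy", energy), ("EDP", edp)]:
    before, after = metric(**baseline), metric(**optimised)
    verdict = "worthwhile" if after < before else "not worthwhile"
    print(f"{name}: {before:.0f} -> {after:.0f}  ({verdict})")
```

Under these invented figures the optimisation saves energy yet worsens the energy-delay product, which is exactly the kind of ambiguity a principled metric framework is meant to resolve.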
106

Computational models of morphology's effects on cellular dynamics

Sayyid, Faiz January 2016 (has links)
Spatial effects such as cell shape, internal cellular organisation and cellular plasticity have very often been considered negligible in models of cellular pathways, and many existing simulation infrastructures do not take such effects into consideration. However, recent experimental results and systems-level theories suggest that even small variations in shape can make a large difference to the fate of the cell. This is particularly the case when considering eukaryotic cells, which have a complex physical structure and many subtle control mechanisms. Bacteria are also interesting for their variation in shape, both between species and in different states of adaptation. In this thesis we perform simulations that quantify the effect of three aspects of morphology - external cellular shape, internal cellular organisation and processes that change the shape of the cell - on the behaviour of model cellular pathways. To perform these simulations we develop Reaction-Diffusion Cell (ReDi-Cell), a highly scalable cell simulation infrastructure built on general-purpose graphics processing unit (GPGPU) computing, for the modelling of cellular pathways in spatially detailed environments. ReDi-Cell is validated against known-good simulations prior to its use in new work. By measuring reaction trajectories and concentration gradients we quantify the responses of simulated cellular pathways to these three spatial aspects. Our results show that model cell behaviour is the composite of cellular morphology and reaction system. Different reaction systems display different dynamics even when placed in identical environments. Traditionally, computational approaches to cell biology have been focussed upon investigating how changes to reaction dynamics alter cellular behaviour. This thesis, on the other hand, demonstrates another way in which reaction dynamics can be altered: by changing the morphology of the cell.
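As a rough, CPU-only illustration of the reaction-diffusion modelling that ReDi-Cell performs at scale on GPUs (this is not ReDi-Cell code), the sketch below diffuses and decays a single species inside a circular "cell" mask, showing how external shape constrains where concentration can spread; the grid size, rate constants and mask are assumptions.

```python
# Toy explicit reaction-diffusion update on a 2D grid with a cell-shaped mask.
import numpy as np

N, D, DECAY, DT, STEPS = 64, 0.2, 0.01, 1.0, 200
conc = np.zeros((N, N))
conc[N // 2, N // 2] = 100.0                       # point source at the centre

yy, xx = np.mgrid[0:N, 0:N]
mask = (xx - N / 2) ** 2 + (yy - N / 2) ** 2 < (N / 3) ** 2   # circular "cell"

def laplacian(u):
    """Five-point stencil with periodic wrap (fine for a toy example)."""
    return (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
            np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)

for _ in range(STEPS):
    conc += DT * (D * laplacian(conc) - DECAY * conc)   # diffusion + first-order decay
    conc *= mask                                         # nothing leaves the cell shape

print(f"total amount remaining inside the cell: {conc.sum():.2f}")
```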
107

Source location privacy in wireless sensor networks under practical scenarios : routing protocols, parameterisations and trade-offs

Gu, Chen January 2018 (has links)
As wireless sensor networks (WSNs) have been applied across a spectrum of application domains, source location privacy (SLP) has emerged as a significant issue, particularly in security-critical situations. In seminal work on SLP, several protocols were proposed as viable approaches to address the issue. However, most state-of-the-art approaches work under specific network assumptions. For example, phantom routing, one of the most popular routing protocols for SLP, assumes a single source. In practical scenarios, however, this assumption is not realistic, as there will be multiple data sources. Other issues of practical interest include network configurations. Thus, this thesis addresses the impact of these practical considerations on SLP. The first step is the evaluation of phantom routing under various configurations, e.g., multiple sources and different network configurations. The results show that phantom routing does not scale to handle multiple sources while providing high SLP, except at the expense of a low message yield. An important issue thus arises from this observation: the need for a routing protocol that can handle multiple sources. As such, a novel parametric routing protocol for SLP in multi-source WSNs, called phantom walkabouts, is proposed. Large-scale experiments are conducted to evaluate the efficiency of phantom walkabouts. The main observation is that phantom walkabouts can provide a high level of SLP at the expense of energy and/or data yield. To deal with these trade-offs, a framework that allows reasoning about them needs to be developed. Thus, a decision-theoretic methodology is proposed that allows reasoning about these trade-offs. The results showcase the viability of this methodology via several case studies.
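For orientation only, the sketch below illustrates the two-phase idea behind phantom routing (it is not the phantom walkabouts protocol from the thesis): a message first takes a short random walk away from the source to a phantom node, and only then is routed towards the sink, so an adversary backtracking messages reaches the phantom rather than the real source. The grid topology and hop counts are illustrative.

```python
# Toy two-phase phantom routing on a grid of sensor nodes.
import random

random.seed(0)
SIZE, WALK_HOPS = 20, 6
source, sink = (2, 2), (17, 17)

def random_walk(start, hops):
    """Phase 1: wander away from the real source to a phantom node."""
    x, y = start
    for _ in range(hops):
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x = min(max(x + dx, 0), SIZE - 1)
        y = min(max(y + dy, 0), SIZE - 1)
    return (x, y)

def greedy_route(start, dest):
    """Phase 2: route from the phantom node towards the sink."""
    path, (x, y) = [start], start
    while (x, y) != dest:
        x += (dest[0] > x) - (dest[0] < x)
        y += (dest[1] > y) - (dest[1] < y)
        path.append((x, y))
    return path

phantom = random_walk(source, WALK_HOPS)
path = greedy_route(phantom, sink)
print(f"phantom node: {phantom}, path length to sink: {len(path) - 1} hops")
```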
108

Developing graph-based co-scheduling algorithms with GPU acceleration

Zhu, Huanzhou January 2016 (has links)
On-chip cache is often shared between processes that run concurrently on different cores of the same processor. Resource contention of this type causes performance degradation for the co-running processes. Contention-aware co-scheduling refers to the class of scheduling techniques that reduce this performance degradation. Most existing contention-aware co-schedulers only consider serial jobs; however, computing systems often contain a mix of both parallel and serial jobs. This thesis aims to tackle these issues. We start by modelling the problem of co-scheduling a mix of serial and parallel jobs as an Integer Programming (IP) problem. We then construct a co-scheduling graph to model the problem, and a set of algorithms is developed to find both optimal and near-optimal solutions. The results show that the proposed algorithms can find the optimal co-scheduling solution and that the proposed approximation technique is able to find near-optimal solutions. In order to improve the scalability of the algorithms, we use a GPU to accelerate the solving process. A graph processing framework, called WolfPath, is proposed in this thesis. By taking advantage of the co-scheduling graph, WolfPath achieves significant performance improvement. Due to the long preprocessing time of WolfPath, we developed WolfGraph, a GPU-based graph processing framework that features minimal preprocessing time and uses the hard disk as a memory extension to solve large-scale graphs on a single machine equipped with a GPU device. Compared with existing GPU-based graph processing frameworks, WolfGraph achieves similar execution time but with minimal preprocessing time.
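As a toy illustration of the co-scheduling decision that the thesis formalises as an IP problem (this is not the thesis algorithms), the sketch below enumerates pairings of four serial jobs under an assumed pairwise degradation matrix and picks the pairing with the lowest total degradation.

```python
# Brute-force contention-aware co-scheduling of four serial jobs into two pairs.
from itertools import permutations

jobs = ["A", "B", "C", "D"]
degradation = {                      # assumed symmetric co-run degradation estimates (%)
    ("A", "B"): 12, ("A", "C"): 3, ("A", "D"): 8,
    ("B", "C"): 15, ("B", "D"): 4, ("C", "D"): 9,
}

def cost(pairing):
    """Total degradation when each pair shares a cache."""
    return sum(degradation[tuple(sorted(p))] for p in pairing)

best = None
for order in permutations(jobs):
    pairing = [(order[0], order[1]), (order[2], order[3])]
    if best is None or cost(pairing) < cost(best):
        best = pairing

print("best co-schedule:", best, "total degradation:", cost(best))
```

Real instances blow up combinatorially with job count (and with parallel jobs spanning multiple cores), which is why the thesis turns to graph-based formulations and GPU acceleration rather than enumeration.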
109

LnCm fault model : complexity and validation

Adamu-Fika, Fatimah January 2016 (has links)
Computer systems are ubiquitous in most aspects of our daily lives, and as such the reliance of end users upon their correct and timely functioning is on the rise. With advances in technology, the functionality of these systems is increasingly being defined in software. At the same time, feature sizes have drastically decreased while feature density has increased, and these hardware trends will continue as technology advances. Consequently, power supply voltages are ever-decreasing while clock frequencies and temperature hotspots are increasing. This steady reduction of integration scales is increasing the sensitivity of computer systems to different kinds of hardware faults. In particular, the likelihood of a single high-energy ion causing double bit upsets (DBUs, due to its energy) or multiple bit upsets (MBUs, due to the incident angle) instead of single bit upsets (SBUs) is increasing. Furthermore, the likelihood of perturbations occurring in the logic circuits is also increasing. Owing to these hardware trends, it has been projected that computer systems will expose such hardware faults to the software level, and accordingly the software is expected to tolerate such perturbations to maintain correct operation, i.e., the software needs to be dependable. Thus, defining and understanding the potential impact of such faults is required in order to propose the right mechanisms to tolerate their occurrence. To ascertain that software is dependable, it is important to validate the software system. This is achieved through the emulation of the types of faults that are likely to occur in the field during execution of the system, and through studying the effects of these faults on the system. Often, this validation process is achieved through a technique called fault injection, which artificially perturbs the execution of the system through the emulation of hardware faults. Traditionally, the single bit-flip (SBF) model is used for emulating single event upsets (SEUs) and single event transients (SETs) in dependability validation. The model assumes that only one SEU or SET occurs during a single execution of the system. However, with MBUs becoming more prominent, the accuracy of the SBF model is limited; hence the need for including MBUs in software system dependability validation. MBUs may occur as multiple bit errors (MBEs) in a single location (memory word or register) or as single bit errors (SBEs) in several locations. Likewise, they may occur as MBEs in several locations. In the context of software-implemented fault injection (SWIFI), the injection of MBUs in all variables is infeasible due to the exponential size of the fault space (the set of all possible faults under a given fault model), thereby making it necessary to carefully select those fault injection points that maximise the probability of causing a failure. Consequently, research has started looking at a more tractable model: double bit upsets (DBUs) in the form of double bit-flips within a single location, L1C2. However, with evidence of the possibility of corruption occurring chip-wide, the applicability and accuracy of L1C2 are restricted. Accordingly, this research focuses on MBUs occurring across multiple locations, whilst seeking to address the exponential fault space problem associated with multiple fault injections. In general, the thesis analyses the complexity of selecting efficient fault-injection locations for injecting multiple MBUs.
In particular, it formalises the problem of multiple bit-flip injection and finds that the problem is NP-complete. There are various ways of addressing this complexity: (i) look for specific cases, (ii) look for heuristics, and/or (iii) weaken the problem specification. The thesis presents one approach for each of these:
- For the specific-cases approach, the thesis presents a novel DBU fault model that manifests as two single bit-flips across two locations. In particular, the research examines the relevance of this L2C1 fault model for system validation. It is found that the L2C1 fault model induces a failure profile that is different from the profiles induced by existing fault models.
- For the heuristic approach, the thesis uses a dependency-aware fault injection strategy to extend the L2C1 fault model and the existing L1C2 fault model into the LnCm (multiple location, multiple corruption) fault model, where n is the number of locations to target and m is the maximum number of corruptions to inject in a given location. It proposes two heuristics to achieve this - first select the set of potential locations, then select the subset of variables within these locations - and it examines the applicability of the proposed framework.
- For the weakened problem specification, the thesis further refines the fault space and proposes a data mining approach to reduce the cost of multiple fault injection campaigns (in terms of the number of multiple fault injection experiments performed). It presents an approach to refine the multiple fault injection points by identifying a subset of these points such that injection into this subset alone is as efficient as injection into the entire set.
These contributions are instrumental in advancing multiple fault injection and making it an effective and practical approach for software system validation.
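To make the L2C1 idea concrete, the sketch below (not the thesis' SWIFI tooling) models program state as a handful of 32-bit words and flips one bit in each of two distinct locations; under the LnCm generalisation one would instead choose n locations and up to m bit-flips per location. The word values and random choices are illustrative.

```python
# Toy L2C1 fault injection: one bit-flip in each of two distinct locations.
import random

random.seed(42)
WORD_BITS = 32
state = [0x0000_00FF, 0x1234_5678, 0xDEAD_BEEF, 0x0BAD_F00D]   # toy "locations"

def inject_l2c1(words):
    """Flip one bit in each of two distinct locations (L2C1 fault model)."""
    loc_a, loc_b = random.sample(range(len(words)), 2)          # two distinct locations
    for loc in (loc_a, loc_b):
        bit = random.randrange(WORD_BITS)                        # one corruption per location
        words[loc] ^= 1 << bit
        print(f"flipped bit {bit:2d} of location {loc}: now {words[loc]:#010x}")
    return words

inject_l2c1(state)
```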
110

Platforms for deployment of scalable on- and off-line data analytics

Coetzee, Peter January 2017 (has links)
The ability to exploit the intelligence concealed in bulk data to generate actionable insights is increasingly providing competitive advantages to businesses, government agencies, and charitable organisations. The burgeoning field of Data Science, and its related applications in the field of Data Analytics, finds broader applicability with each passing year. This expansion of users and applications is matched by an explosion in tools, platforms, and techniques designed to exploit more types of data in larger volumes, with more techniques, and at higher frequencies than ever before. This diversity in platforms and tools presents a new challenge for organisations aiming to integrate Data Science into their daily operations. Designing an analytic for a particular platform necessarily involves “lock-in” to that specific implementation – there are few opportunities for algorithmic portability. It is increasingly challenging to find engineers with experience in the diverse suite of tools available, as well as an understanding of the precise details of the domain in which they work: the semantics of the data, the nature of queries and analyses to be executed, and the interpretation and presentation of results. The work presented in this thesis addresses these challenges by introducing a number of techniques to facilitate the creation of analytics for equivalent deployment across a variety of runtime frameworks and capabilities. In the first instance, this capability is demonstrated using the first Domain Specific Language and associated runtime environments to target multiple best-in-class frameworks for data analysis from the streaming and off-line paradigms. This capability is extended with a new approach to modelling analytics based around a semantically rich type system. An analytic planner using this model is detailed, empowering domain experts to build their own scalable analyses without any specific programming or distributed systems knowledge. This planning technique is used to assemble complex ensembles of hybrid analytics, automatically applying multiple frameworks in a single workflow. Finally, this thesis demonstrates a novel approach to the speculative construction, compilation, and deployment of analytic jobs based around the observation of user interactions with an analytic planning system.
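As a hedged sketch of type-driven analytic planning (not the thesis' DSL or planner), the example below has each analytic component declare the semantic type it consumes and produces, and searches for a chain of components that converts the available data type into the requested result type; the component names and types are invented.

```python
# Toy type-driven planner: chain analytic components by matching semantic types.
from collections import deque

components = [
    ("tokenise",       "RawTweets",   "Tokens"),
    ("entity_extract", "Tokens",      "Entities"),
    ("geo_resolve",    "Entities",    "GeoEntities"),
    ("heatmap",        "GeoEntities", "Heatmap"),
]

def plan(source_type, goal_type):
    """Breadth-first search over component signatures for a valid pipeline."""
    queue = deque([(source_type, [])])
    seen = {source_type}
    while queue:
        current, pipeline = queue.popleft()
        if current == goal_type:
            return pipeline
        for name, consumes, produces in components:
            if consumes == current and produces not in seen:
                seen.add(produces)
                queue.append((produces, pipeline + [name]))
    return None

print(plan("RawTweets", "Heatmap"))   # ['tokenise', 'entity_extract', 'geo_resolve', 'heatmap']
```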
