91

A biodiversity approach to cyber security

Jackson, Jennifer T. January 2017 (has links)
Cyber crime is a significant threat to modern society that will continue to grow as technology is integrated further into our lives. Cyber attackers can exploit vulnerabilities to access computing systems and propagate malware. Of growing concern is the use of multiple exploits across layers of the software stack, together with faster criminal response times to newly disclosed vulnerabilities, which create surges in attacks before signature-based malware protection can take effect. The wide-scale adoption of a few software systems fuels the problem, allowing identical vulnerabilities to be exploited across networks to maximise infection in a single attack. Tackling the threat therefore requires new perspectives. Biodiversity is critical to the functioning of healthy ecosystems. Whilst the idea of diversity benefiting computer security is not new, there are still gaps in understanding its advantages. A mathematical model and an agent-based model have been developed using the ecosystem as a framework. Biodiversity is generated by individualised software stacks, defined as genotypes with multiple loci. The models allow the protection offered by diversity to be quantified for ad hoc networks, which are expected to become prevalent in the future, by specifying how much diversity is needed to tolerate or mitigate two abstract representations of malware that encompass different ways in which multiple exploits target software stack layers. Outputs include the key components of ecosystem stability: resistance and resilience. Results show that diversity by itself can reduce susceptibility, increase resistance, and increase the time taken for malware to spread, thereby allowing networks to tolerate malware and maintain Quality of Service. When dynamic diversity is used as part of a multi-layered defence strategy with additional mechanisms such as blacklisting, virtualisation, and recovery through patching and signature-based protection, diversity becomes more effective, since the power of dynamic software updating can be utilised to mitigate attacks whilst maintaining network operations.
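A minimal sketch of how such genotype-based diversity could be represented, assuming illustrative layer names and a simple infection rule rather than the thesis's exact model: each node's software stack is a genotype with one variant per layer (locus), and malware carrying multiple exploits can only infect a node whose genotype matches at every targeted locus.

import random

LAYERS = ["os", "runtime", "browser", "office"]                      # loci in the genotype
VARIANTS = {layer: [f"{layer}_v{i}" for i in range(4)] for layer in LAYERS}

def random_genotype():
    """Individualised software stack: one variant chosen per layer (locus)."""
    return {layer: random.choice(VARIANTS[layer]) for layer in LAYERS}

def is_susceptible(genotype, exploits):
    """Malware with multiple exploits infects only if every targeted locus matches."""
    return all(genotype.get(layer) == variant for layer, variant in exploits.items())

# A 200-node ad hoc network of individualised stacks, attacked by abstract malware
# that targets two layers of the software stack.
nodes = [random_genotype() for _ in range(200)]
malware = {"os": "os_v1", "browser": "browser_v2"}

susceptible = sum(is_susceptible(g, malware) for g in nodes)
print(f"{susceptible}/{len(nodes)} nodes susceptible")               # roughly 1/16 of the network

With identical stacks every node would match the malware's exploit set; with four variants per layer only around one node in sixteen does, which is the kind of tolerance effect the models quantify.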
92

Mechanism design for fair allocation on uniform machines

Qu, Ruini January 2018 (has links)
In traditional machine scheduling problems, a central decision maker, provided with all the relevant information about a system, is asked to derive an allocation scheme that optimizes some global objective while simultaneously satisfying all the side constraints of the problem. However, since the emergence of the Internet as a computation platform, the assumption of information completeness no longer holds, and algorithm designers are encouraged to reconsider the problem from a decentralized perspective. Most importantly, when decisions are made by independent agents, a rational agent is likely to implement whichever strategy maximizes its own interests, regardless of the overall system performance. Such situations require algorithm designers not only to focus on the global performance of the system, but also to take into account the strategic behaviour of the individuals involved. Algorithmic Mechanism Design (AMD), a term coined by Nisan and Ronen (1999), specifically targets this kind of problem, where part of the input is under the control of selfish agents who have no incentive to tell the truth unless truth-telling is for their own good. This type of design endeavours to merge the challenges of two classic disciplines: algorithm design in computer science, and mechanism design in game theory. The former emphasizes the computational efficiency of an algorithm while ignoring elements related to incentives; the latter, instead, normally yields game-theoretic outcomes with poor computational properties. AMD, on the other hand, aims to achieve good game-theoretic properties and good computational properties at the same time. Guided by the idea of AMD, research has been conducted on scheduling problems with various models and various objectives. Among them, some of the most popular objectives include the minimization of the maximum completion time (also known as the makespan) and the maximization of the minimum completion time (also known as the cover). Minimizing the makespan is naturally related to efficiency, as it ensures that the entire job set is completed within the shortest possible time; maximizing the cover, instead, embodies the concept of fairness from the machine owner's perspective, in the sense that a machine will not be exempted on account of its slowness. However, it can be argued that the fairness embodied by both objectives is limited, as both can lead to extreme situations. Fairness is an important social concept that has not been well considered in the AMD literature. This is surprising given that "each person possesses an inviolability founded on justice that even the welfare of the society as a whole cannot override" (Rawls, 2009, p. 3). To concretely state the importance of fairness in the scheduling context, consider the problem faced by the U.S. Federal Aviation Administration. Billions in monetary losses are incurred each year as a result of unpredictable system delays. To improve this situation, scholars have raised proposals that guarantee more efficient schedules and claim greater cost savings. Unfortunately, few of those proposals have been implemented in practice, mainly because they fail to take the issue of fairness into consideration (Bertsimas and Patterson, 2000). Fairness plays a key role in resource allocation, especially in socially oriented areas, including education, medical systems, and businesses.
Although it is in the interest of the central authority to achieve system efficiency when allocating resources, individual players tend to care more about their own interests. If they cannot maximize their own benefits, then they at least want to be treated fairly. As a special case of resource allocation, machine scheduling problems also face similar challenges deriving from the players' desire for fairness. To achieve higher levels of fairness, we propose a new objective called minimizing the maximum deviation, which aims to minimize the maximum deviation between the completion time of each individual machine and the average completion time of the system, calculated as the sum of all the job sizes divided by the sum of all the machine speeds. To the best of our knowledge, this objective has not been considered by others before.
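As a formalisation of the stated objective, with notation assumed here (p_j the size of job j, s_i the speed of machine i, S_i the set of jobs allocated to machine i, and C_i its completion time), the min-max-deviation problem can be written in LaTeX as

\min_{\text{allocations}} \; \max_{i} \bigl| C_i - \bar{C} \bigr|,
\qquad
C_i = \frac{\sum_{j \in S_i} p_j}{s_i},
\qquad
\bar{C} = \frac{\sum_{j} p_j}{\sum_{i} s_i},

so that an allocation is judged by how far the most advantaged or disadvantaged machine's completion time deviates from the system-wide average completion time.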
93

Variability of structurally constrained and unconstrained functional connectivity in schizophrenia

Yao, Ye January 2016 (has links)
In this thesis, entropy is used to characterize intrinsic ageing properties of the human brain. Analysis of fMRI data from a large dataset of individuals, using resting-state BOLD signals, demonstrated that a functional connectivity entropy associated with brain activity increases with age. Over an average lifespan, the entropy, which was calculated from a population of individuals, increased by approximately 0.1 bits, due to correlations in BOLD activity becoming more widely distributed. This is attributed to the number of excitatory neurons and the excitatory conductance decreasing with age. Incorporating these properties into a computational model leads to results quantitatively similar to the fMRI data. The dataset included males and females, and significant differences were found between them. The entropy of males at birth was lower than that of females; however, the entropies of the two sexes increase at different rates and intersect at approximately 50 years, after which males have the larger entropy. In addition, the connectivity between different brain areas provides evidence about normal function and dysfunction. Changes are described in the distribution of these connectional strengths in schizophrenia using a large sample of resting-state fMRI data. The functional connectivity entropy, which measures the dispersion of the functional connectivity distribution, was lower in patients with schizophrenia than in controls, reflecting a reduction in both strong positive and strong negative correlations between brain regions. The decrease in the functional connectivity entropy was strongly associated with an increase in the positive, negative, and general symptoms. Using an integrate-and-fire simulation model based on anatomical connectivity, it is shown that a reduction in the efficacy of the NMDA-mediated excitatory synaptic inputs can reduce the functional connectivity entropy to resemble the pattern seen in schizophrenia. Spatial variation in connectivity is an integral aspect of the brain's architecture; in the absence of this variability, the brain may act as a single homogeneous entity without regional specialization. In this thesis, we investigate the variability in functional links categorized on the basis of the presence of direct structural paths (primary) or indirect paths mediated by one (secondary) or more (tertiary) brain regions, ascertained by diffusion tensor imaging. We quantified the variability in functional connectivity using an unbiased estimate of unpredictability (functional connectivity entropy) in a neuropsychiatric disorder in which the structure-function relationship is considered to be abnormal. Thirty-four patients and 32 healthy controls underwent DTI and resting-state functional MRI scans. Less than one-third (27.4% in patients, 27.85% in controls) of functional links between brain regions were regarded as direct primary links on the basis of DTI tractography, while the rest were secondary or tertiary. The most significant changes in the distribution of functional connectivity in schizophrenia occur in indirect tertiary paths with no direct axonal linkage in both early (p=0.0002, d=1.46) and late (p=1_10).
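As an illustrative sketch, and not the exact estimator used in the thesis, the functional connectivity entropy can be thought of as the Shannon entropy of the distribution of pairwise correlations between regional BOLD time series, here estimated from a binned histogram; the region count, time-series length, and bin count are assumptions.

import numpy as np

def functional_connectivity_entropy(bold, n_bins=50):
    """Shannon entropy (in bits) of the distribution of pairwise correlations.

    bold: array of shape (n_regions, n_timepoints) of BOLD time series.
    A wider spread of correlations (more strong positive and negative links)
    gives a higher entropy; a narrower spread gives a lower entropy.
    """
    corr = np.corrcoef(bold)                               # functional connectivity matrix
    iu = np.triu_indices_from(corr, k=1)                   # unique region pairs only
    hist, _ = np.histogram(corr[iu], bins=n_bins, range=(-1.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Toy example: 90 regions, 200 time points of synthetic BOLD-like signal
rng = np.random.default_rng(0)
signal = rng.standard_normal((90, 200))
print(functional_connectivity_entropy(signal))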
94

Macro-micro approach for mining public sociopolitical opinion from social media

Wang, Bo January 2017 (has links)
During the past decade, we have witnessed the emergence of social media, which has gained prominence as a means for the general public to exchange opinions on a broad range of topics. Furthermore, its social and temporal dimensions make it a rich resource for policy makers and organisations seeking to understand public opinion. In this thesis, we present our research in understanding public opinion on Twitter along three dimensions: sentiment, topics and summary. In the first line of our work, we study how to classify public sentiment on Twitter. We focus on the task of multi-target-specific sentiment recognition on Twitter, and propose an approach which utilises syntactic information from the parse tree in conjunction with the left-right context of the target. We show state-of-the-art performance on two datasets, including a multi-target Twitter corpus on UK elections which we make publicly available for the research community. Additionally, we conduct two preliminary studies: cross-domain emotion classification on discourse around arts and cultural experiences, and social spam detection to improve the signal-to-noise ratio of our sentiment corpus. Our second line of work focuses on automatic topical clustering of tweets. Our aim is to group tweets into a number of clusters, with each cluster representing a meaningful topic, story, event or a reason behind a particular choice of sentiment. We explore various ways of tackling this challenge and propose a two-stage hierarchical topic modelling system that is efficient and effective in achieving our goal. Lastly, in our third line of work, we study the task of summarising tweets on common topics, with the goal of providing informative summaries of real-world events/stories or of the reasoning underlying the sentiment expressed towards an issue/entity. As most existing tweet summarisation approaches rely on extractive methods, we propose to apply a state-of-the-art neural abstractive summarisation model to tweets. We also tackle the challenge of cross-medium supervised summarisation with no target-medium training resources. To the best of our knowledge, there is no existing work studying neural abstractive summarisation for tweets. In addition, we present a system for providing interactive visualisation of topic-entity sentiments and the corresponding summaries in chronological order. Throughout the work presented in this thesis, we conduct experiments to evaluate and verify the effectiveness of our proposed models against relevant baseline methods. Most of our evaluations are quantitative; however, we also perform qualitative analyses where appropriate. This thesis provides insights and findings that can be used to better understand public opinion in social media.
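A minimal, generic sketch of the topical clustering step; a TF-IDF plus k-means stand-in is used here purely for illustration and is not the two-stage hierarchical topic modelling system proposed in the thesis, and the tweets and cluster count are invented.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

tweets = [
    "polls open across the UK for the general election",
    "long queues at my local polling station this morning",
    "new exhibition at the gallery was absolutely stunning",
    "loved the immersive art installation, highly recommend",
]

# Represent tweets as TF-IDF vectors and group them into topical clusters
vectors = TfidfVectorizer(stop_words="english").fit_transform(tweets)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for tweet, label in zip(tweets, labels):
    print(label, tweet)    # each cluster label stands for a topic/story behind the tweets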
95

Mobile learning security in Nigeria

Shonola, Shaibu A. January 2017 (has links)
Innovation in learning technologies is driven by demands on Higher Education Institutions to meet students' needs and make knowledge delivery easier. These technologies can play an important role in extending the possibilities for teaching, learning, and research in higher educational institutions (HEIs). Mobile learning emerged from this innovation as a result of the massive growth in the number of mobile devices, driven by their availability and affordability among students. The lightweight nature of mobile devices in comparison to textbooks is a further source of attraction for students. Competition in the mobile device industry encourages mobile developers to be innovative and to strive constantly to introduce new features in their devices. Consequently, newer sources of risk are being introduced into the mobile computing paradigm at the production level. Similarly, many m-learning developers are interested in developing learning content and instruction without adequate consideration for the security of stakeholders' data, whereas mobile devices used in m-learning can become vulnerable if security aspects are neglected. The purpose of this research is to identify the security concerns in mobile learning from the users' perspective, based on studies conducted in HEIs in Nigeria. While the challenges of adopting mobile learning in Nigerian universities are enormous, this study identifies the critical security challenges that learners and other users may face when using mobile devices for educational purposes. It examines the effects on users if their privacy is breached and provides recommendations for alleviating the security threats. After considering users' opinions and evaluating the relevant literature, this research also proposes security frameworks for m-learning as bedrocks for designing and implementing a secure environment. In identifying the security threats, the study investigates the components of mobile learning systems that are prone to security threats and the common attack routes in m-learning, especially among students in Nigerian universities. In order to reduce the security threats, the research presents a mobile security enhancement app, designed and developed for Android smart mobile devices to promote security awareness among students. The app can also identify some significant security weaknesses by scanning for vulnerabilities in m-learning devices and reporting any security threat. The responsibilities of the stakeholders in ensuring risk-free mobile learning environments are also examined.
96

Game theoretic models of networks security

Katsikas, Stamatios January 2017 (has links)
Decision making in the context of crime execution and crime prevention can be successfully investigated with the implementation of game-theoretic tools. Evolutionary and mean-field game theory allow for the consideration of a large number of interacting players organized in the social and behavioural structures which typically characterize this context. Alternatively, 'traditional' game-theoretic approaches can be applied to study the security of an arbitrary network as a two-player non-cooperative game. Theoretically underpinned by these instruments, in this thesis we formulate and analyse game-theoretic models of inspection, corruption, counter-terrorism, patrolling, and similarly interpreted paradigms. Our analysis suggests optimal strategies for the players involved, and illustrates the long-term behaviour of the introduced systems. Our contribution lies in the explicit formulation and thorough analysis of real-life scenarios involving security in network structures.
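A small illustrative two-player non-cooperative game of the kind described: an attacker chooses which of two nodes to strike and a defender chooses which to patrol. The payoffs, node labels, and zero-sum assumption are invented for the example; the closed-form mixed equilibrium below is standard for 2x2 zero-sum games without a saddle point.

import numpy as np

# Attacker's payoff matrix: rows = node the attacker strikes,
# columns = node the defender patrols (zero-sum: defender's payoff is the negative).
A = np.array([[0.0, 5.0],    # strike node 1: caught if patrolled, gains 5 otherwise
              [3.0, 0.0]])   # strike node 2: gains 3 only if node 1 is patrolled instead

# Choose the defender's patrol probability q for node 1 so the attacker is
# indifferent between striking either node (mixed Nash equilibrium).
q = (A[0, 1] - A[1, 1]) / (A[0, 1] - A[1, 1] + A[1, 0] - A[0, 0])
value = A[1, 0] * q + A[1, 1] * (1 - q)

print(f"Defender patrols node 1 with probability {q:.3f}")    # 0.625
print(f"Expected damage at equilibrium: {value:.3f}")          # 1.875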
97

Predicting context and locations from geospatial trajectories

Thomason, Alasdair January 2017 (has links)
Adapting environments to the needs and preferences of their inhabitants is becoming increasingly important as the world population continues to grow. One way in which this can be achieved is through the provision of timely information, as well as through the personalisation of services. Providing personalisation in this way requires an understanding of both the historical and future actions of individuals. Using geospatial trajectories collected from personal location-aware hardware, e.g. smartphones, as a basis, this thesis explores the extent to which we can leverage the latent knowledge in such trajectories to understand the historic and future behaviours of individuals. Several machine learning tools for the task are presented, including a novel clustering algorithm that can identify the locations where people spend their time while disregarding noise. The knowledge exposed by such a system is then enhanced with a procedure for identifying the geographic features that the person was interacting with, providing information on what the user may have been doing at that time. Interactions with these features are subsequently used as a basis for understanding user actions through a new contextual clustering approach that identifies periods of time in which the user may have been performing similar activities or have had similar goals. Combined, the presented techniques provide a basis for learning about the actions of individuals. To further enhance this knowledge, the research concludes by presenting a new machine learning model capable of summarising and predicting the future context of individuals, requiring only geospatial trajectories to be collected from the user. Throughout this work, the potential benefits offered by geospatial trajectories are explored, with thorough evaluations of the proposed techniques alongside comparisons to existing approaches.
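A generic density-based sketch of the location-extraction step: DBSCAN stands in here for the novel clustering algorithm developed in the thesis, and the coordinates, radius, and planar-distance approximation are illustrative assumptions.

import numpy as np
from sklearn.cluster import DBSCAN

# Toy geospatial trajectory: (latitude, longitude) fixes, two visited places plus noise
points = np.array([
    [52.3794, -1.5616], [52.3795, -1.5617], [52.3793, -1.5615],   # workplace
    [52.4128, -1.5093], [52.4129, -1.5095], [52.4127, -1.5094],   # home
    [52.3960, -1.5400],                                           # in-transit noise fix
])

# ~50 m radius expressed in degrees (rough planar approximation for a small area)
eps_deg = 50 / 111_000
labels = DBSCAN(eps=eps_deg, min_samples=3).fit_predict(points)

for cluster_id in sorted(set(labels) - {-1}):                     # label -1 is discarded noise
    centre = points[labels == cluster_id].mean(axis=0)
    print(f"significant location {cluster_id}: {centre}")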
98

Manifold learning for emulations of computer models

Xing, Wei January 2016 (has links)
Computer simulations are widely used in scientific research and engineering. Although they can provide accurate results, their computational expense is normally high, which hinders their application to problems where repeated evaluations are required, e.g., design optimization and uncertainty quantification. For partial differential equation (PDE) models the outputs of interest are often spatial fields, leading to high-dimensional output spaces. Although emulators can be used to find faithful and computationally inexpensive approximations of computer models, there are few methods for handling high-dimensional output spaces. For Gaussian process (GP) emulation, approximations of the correlation structure and/or dimensionality reduction are necessary. Linear dimensionality reduction will fail when the output space is not well approximated by a linear subspace of the ambient space in which it lies. Manifold learning can overcome the limitations of linear methods if an accurate inverse map is available. In this thesis, manifold learning is applied to construct GP emulators for very high-dimensional output spaces arising from parameterised PDE model simulations. Artificial neural network (ANN) and support vector machine (SVM) emulators using manifold learning are also studied. A general framework for approximating the inverse map and a new, efficient method for diffusion maps were developed. The manifold learning based emulators are then used to extend reduced-order models (ROMs) based on proper orthogonal decomposition to dynamic, parameterised PDEs. A similar approach is used to extend the discrete empirical interpolation method (DEIM) to ROMs for nonlinear, parameterised dynamic PDEs.
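The general reduce-then-emulate pattern can be sketched as follows, using PCA as the (linear) dimensionality reduction purely for illustration; the thesis replaces this with manifold learning (e.g. diffusion maps) together with an approximate inverse map when the output manifold is nonlinear. The training data, dimensions, and latent size below are invented.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor

# Hypothetical training data: 40 simulator runs, each mapping a parameter vector
# theta (dimension 3) to a high-dimensional spatial output field (dimension 10_000).
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 1.0, size=(40, 3))
fields = np.sin(theta @ rng.standard_normal((3, 10_000)))   # stand-in for expensive PDE solves

# 1. Reduce the output space to a handful of latent coordinates.
pca = PCA(n_components=5)
latent = pca.fit_transform(fields)

# 2. Emulate each latent coordinate as a function of the input parameters.
gps = [GaussianProcessRegressor().fit(theta, latent[:, k]) for k in range(5)]

# 3. Predict a new field: emulate in latent space, then map back to the full field.
theta_new = np.array([[0.2, 0.7, 0.4]])
latent_new = np.column_stack([gp.predict(theta_new) for gp in gps])
field_new = pca.inverse_transform(latent_new)
print(field_new.shape)   # (1, 10000): cheap approximation of a full simulator run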
99

Visually lossless coding for the HEVC standard : efficient perceptual quantisation contributions for HEVC

Prangnell, Lee January 2017 (has links)
In the context of video compression, visually lossless coding refers to a form of perceptual compression. The objectives are as follows: i) to lossy code a raw video sequence to the lowest possible bitrate; ii) to ensure that the compressed sequence is perceptually identical to the raw video data. Because of the vast bitrate reductions which cannot otherwise be achieved, the research and development of visually lossless coding techniques (e.g., perceptual quantisation methods) is considered to be important in contemporary video compression research, particularly for the High Efficiency Video Coding (HEVC) standard. The default quantisation techniques in HEVC — namely, Uniform Reconstruction Quantisation (URQ) and Rate Distortion Optimised Quantisation (RDOQ) — are not perceptually optimised. Neither URQ nor RDOQ take into account the Modulation Transfer Function (MTF)-based visual masking properties of the Human Visual System (HVS); e.g., luma and chroma spatial masking. Moreover, URQ and RDOQ do not intrinsically possess the capacity to distinguish luma data from chroma data. Both of these shortcomings can lead to coding inefficiency (i.e., wasting bits by not removing perceptually irrelevant data). Therefore, it is desirable to develop visually lossless coding (perceptual quantisation) techniques for HEVC. For example, by taking chrominance masking into account, perceptual quantisation techniques can be designed to discard — to a very high degree — chroma-based psychovisual redundancies from the chroma channels in raw YCbCr video data. To this end, four novel perceptual quantisation contributions are proposed in this thesis. In Chapter 3, a novel transform coefficient-level perceptual quantisation method is proposed. In HEVC, each frequency sub-band in the Discrete Cosine Transform (DCT) frequency domain constitutes a different level of perceptual importance to the HVS. In terms of perceptual importance, the DC coefficient (very low frequency) is the most important transform coefficient, whereas the AC coefficients farthest away from the DC coefficient (very high frequency AC coefficients) are the least perceptually relevant. Therefore, the proposed technique is designed to quantise AC coefficients based on their Euclidean distance from the DC coefficient. In Chapter 4, two novel perceptual quantisation methods are proposed, which are based on HVS visual masking in the spatial domain. The first technique operates at the Coding Unit (CU) level and the second operates at the Coding Block (CB) level. Both techniques exploit the fact that the HVS can tolerate high levels of distortion in high variance (busy) regions of compressed luma and chroma data. The CU-level method adjusts the Quantisation Parameter (QP) of a 2N×2N CU based on cross colour channel variance computations. The CB-level technique separately adjusts the QP of the Y, Cb and Cr CBs in a CU based on separate variance computations in each colour channel. In Chapter 5, a novel CB-level luma and chroma perceptual quantisation technique — based on a Just Noticeable Distortion (JND) model — is proposed for HEVC. The objective of this technique is to attain visually lossless coding at extremely low bitrates by exploiting HVS-related luminance adaptation and chrominance adaptation. Consequently, this facilitates JND perceptual quantisation based on luminance spatial masking and chrominance spatial masking. 
The proposed technique applies high levels of perceptual quantisation to luma and chroma data, which is achieved by separately adjusting the Quantisation Step Sizes (QSteps) at the level of the Y CB, the Cb CB and the Cr CB in a CU. To the best of the author’s knowledge, this is the first JND-based perceptual quantisation technique that is compatible with high bit depth YCbCr data irrespective of its chroma sampling ratio. The novel techniques proposed in this thesis are evaluated thoroughly. The methodology utilised in the experiments consists of an exhaustive subjective visual quality assessment in addition to an extensive objective visual quality evaluation. The subjective evaluation is based on the International Telecommunication Union (ITU) standardised assessment ITU-T Rec. P.910. In these tests, several participants undertake a considerable number of subjective visual inspections (e.g., spatiotemporal analyses of the compressed sequences versus the raw video data) to ascertain the efficacy of the proposed contributions. The objective visual quality evaluation includes quantifying the mathematical reconstruction quality of the video data compressed by the proposed techniques, which is carried out by employing the Structural Similarity Index (SSIM) and Peak Signal-to-Noise Ratio (PSNR) visual quality metrics.
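An illustrative sketch of the Chapter 3 idea: scaling the quantisation of each AC coefficient in an N×N DCT block by its Euclidean distance from the DC coefficient. The scaling function and its strength parameter are assumptions for illustration; HEVC's actual quantisation pipeline involves further scaling lists and rounding behaviour not shown here.

import numpy as np

def perceptual_qsteps(base_qstep, n=8, strength=0.5):
    """Return an n x n matrix of quantisation step sizes for one transform block.

    The DC coefficient at position (0, 0) keeps the base QStep; AC coefficients
    are quantised more coarsely the further they lie from the DC coefficient,
    reflecting their lower perceptual importance to the HVS.
    """
    rows, cols = np.indices((n, n))
    distance = np.sqrt(rows ** 2 + cols ** 2)          # Euclidean distance from DC
    return base_qstep * (1.0 + strength * distance)

def quantise(coeffs, qsteps):
    """Uniform quantisation of a block of transform coefficients."""
    return np.round(coeffs / qsteps).astype(int)

block = np.random.default_rng(0).normal(0, 40, size=(8, 8))
levels = quantise(block, perceptual_qsteps(base_qstep=10.0))
print(levels)   # high-frequency coefficients are mostly driven to zero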
100

Analytical modelling for the performance prediction and optimisation of near-neighbour structured grid hydrodynamics

Davis, James A. January 2017 (has links)
The advent of modern High Performance Computing (HPC) has facilitated the use of powerful supercomputing machines that have become the backbone of data analysis and simulation. With such a variety of software and hardware available today, understanding how well such machines can perform is key for both efficient use and future planning. With substantial costs and multi-year turn-around times, procuring a new HPC architecture can be a significant undertaking. In this work, we introduce one measure for capturing the performance of such machines: analytical performance models. These models provide a mathematical representation of the behaviour of an application in terms of how its various components perform on an architecture. Parameterising the workload so that the time taken to compute can be described in relation to one or more benchmarkable statistics yields a reusable representation of an application that can be applied to multiple architectures. This work goes on to introduce one benchmark of interest, Hydra. Hydra is a benchmark 3D Eulerian structured mesh hydrocode implemented in Fortran, with which the explosive compression of materials, shock waves, and the behaviour of materials at the interface between components can be investigated. We assess its scaling behaviour and use this knowledge to construct a performance model that accurately predicts the runtime to within 15% across three separate machines, each with its own distinct characteristics. Further, this work explores various optimisation techniques, some of which yield a marked reduction in the overall walltime of the application. Finally, another software application of interest with similar behaviour patterns, PETSc, is examined to demonstrate how different applications can exhibit similar modellable patterns.
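A minimal sketch of the analytical form such a model typically takes; the function, parameters, and figures below are hypothetical and are not Hydra's measured characteristics. Per-iteration runtime is expressed as a compute term parameterised by a benchmarked grind time plus a near-neighbour communication term built from message latency and bandwidth.

def predicted_runtime(cells_per_rank, iterations, grind_time,
                      halo_msgs, halo_bytes, latency, bandwidth):
    """Simple analytical model: T = iterations * (compute + communication).

    grind_time : benchmarked seconds per cell per iteration on the target machine
    latency    : per-message cost in seconds
    bandwidth  : sustained bytes per second for halo exchanges
    """
    compute = cells_per_rank * grind_time
    communicate = halo_msgs * (latency + halo_bytes / bandwidth)
    return iterations * (compute + communicate)

# Hypothetical figures for a near-neighbour structured-grid code on one rank
t = predicted_runtime(cells_per_rank=250_000, iterations=2_000,
                      grind_time=4e-8, halo_msgs=6,
                      halo_bytes=80_000, latency=2e-6, bandwidth=5e9)
print(f"predicted walltime: {t:.1f} s")

Because each term is tied to a benchmarkable statistic (grind time, latency, bandwidth), the same expression can be re-evaluated with figures measured on a different machine, which is what makes the representation reusable across architectures.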
