271. Model transformation dependability evaluation by the automated creation of model generators. Shah, Seyyed Madasar Ali (January 2012)
This thesis concerns the automatic creation of model generators to assist the validation of model transformations. Model-driven software development advocates models as the main artefact representing software during development; transformation tools automatically convert such models for use at different stages of development, and in one application of the method it becomes possible to synthesise software implementations from design models. However, the transformations used to convert models are hand-written and therefore prone to development error. An error in a transformation can propagate into the generated software, potentially producing many invalid systems. Establishing that model transformations are reliable is therefore fundamental to the success of modelling as a principal software development practice. Models generated by the technique presented in this thesis can be applied to validate transformations. Several existing transformation validation techniques also employ some form of conversion, yet they cannot be used to validate the conversions they themselves rely on. A defining feature of the technique presented here is its own use of transformations, which makes it self-hosting: an implementation of the technique can create generators that assist both the validation of model transformations in general and the validation of that implementation itself.
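A minimal sketch of the idea of generator-based transformation validation, assuming a toy setting: generated instance models exercise a transformation against simple oracle properties. The metamodel, the renaming transformation and the invariants are hypothetical stand-ins, not the thesis's synthesised generators.

```python
import random

# Hypothetical toy metamodel: a "model" is simply a list of element names.
def generate_model(rng, classes=("A", "B", "C"), max_size=10):
    return [rng.choice(classes) for _ in range(rng.randint(1, max_size))]

def transform(model):
    # Transformation under test: rename every "A" element to "X".
    return ["X" if e == "A" else e for e in model]

def validate(trials=1000, seed=0):
    rng = random.Random(seed)
    for _ in range(trials):
        src = generate_model(rng)
        out = transform(src)
        # Oracle: the transformation preserves size and leaves no "A".
        assert len(out) == len(src), "element lost or duplicated"
        assert "A" not in out, "renaming missed an element"
    return True

print(validate())  # True if all generated models pass the oracle
```

In the self-hosting setting the abstract describes, the generator itself is produced by a transformation, so the same validation loop can be turned on the machinery that builds generators.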
272. Targeting the automatic: nonconscious behaviour change using technology. Pinder, Charlie (January 2018)
Digital interventions have great potential to support people in changing their behaviour. However, most interventions focus on strategies that target limited conscious resources, reducing their potential impact. We outline how these may fail in the longer term due to issues with theory, users and technology. We propose an alternative: directly targeting nonconscious processes to achieve behaviour change. We synthesise Dual Process Theory, modern habit theory and Goal Setting Theory, which together model how users form and break nonconscious behaviours, into an explanatory framework for exploring nonconscious behaviour change interventions. We explore the theoretical and practical implications of this approach, and apply it to a series of empirical studies. The studies examine nonconscious-targeting interventions across a continuum of conscious attention required at the point of behavioural action, from high (just-in-time reminders within implementation intentions) to medium (training paradigms within cognitive bias modification) to low (subliminal priming). The findings show that these single-nonconscious-target interventions have mixed results under in-the-wild and semi-controlled conditions. We conclude by outlining how interventions might strategically deploy multiple interventions targeting the nonconscious at differing levels of conscious attention, and by identifying promising avenues for future research.
273. Community detection in complex networks using evolutionary computation. Jia, Guanbo (January 2017)
In the real world, many complex systems can be naturally represented as complex networks, one distinctive feature of which is community structure. Community detection, i.e., identifying this community structure, provides insight into the relationship and interaction between network function and topology, and has become increasingly important in many scientific fields. In this thesis, we first propose a cooperative coevolutionary module identification algorithm, named CoCoMi, to address the scalability problem of detecting community structures, especially in medium- and large-scale complex networks. Second, we propose a consensus community detection algorithm based on multimodal optimization and fast Surprise, named CoCoMOS, to detect community structures in complex networks. Third, we propose an adaptive ensemble selection and multimodal optimization based consensus community detection algorithm, named MASCOD, to find high-quality, stable consensus partitions of community structures in complex networks. The performance of the three proposed algorithms is evaluated on well-known social, artificial and biological complex networks, and the experimental results demonstrate that all three are highly competitive with other state-of-the-art community detection algorithms.
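None of CoCoMi, CoCoMOS or MASCOD is reproduced here; as a reference point for this family of methods, the sketch below runs a generic evolutionary loop that maximises modularity under a locus-based adjacency encoding (each node points at one neighbour; the connected components of those links are the communities). Population size, mutation rate and the karate-club test graph are illustrative choices.

```python
import random
import networkx as nx
from networkx.algorithms.community import modularity

def decode(genome, G):
    # Edges (i, genome[i]) form a graph whose connected
    # components are the candidate communities.
    H = nx.Graph()
    H.add_nodes_from(G.nodes)
    H.add_edges_from(genome.items())
    return list(nx.connected_components(H))

def random_genome(G):
    # Each node points at one random neighbour (locus-based encoding).
    return {i: random.choice(list(G.neighbors(i))) for i in G.nodes}

def mutate(genome, G, rate=0.1):
    child = dict(genome)
    for i in child:
        if random.random() < rate:
            child[i] = random.choice(list(G.neighbors(i)))
    return child

def evolve(G, pop_size=60, generations=100):
    fitness = lambda g: modularity(G, decode(g, G))
    pop = [random_genome(G) for _ in range(pop_size)]
    for _ in range(generations):
        # Elitist selection: keep the best half, refill with mutants.
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]
        pop = elite + [mutate(random.choice(elite), G)
                       for _ in range(pop_size - len(elite))]
    return decode(max(pop, key=fitness), G)

G = nx.karate_club_graph()
parts = evolve(G)
print(len(parts), round(modularity(G, parts), 3))
```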
274. Formalised responsibility modelling for automated socio-technical systems analysis. Simpson, Robbie (January 2017)
Modelling the structure of socio-technical systems as a basis for informing software system design involves a difficult compromise. Formal methods struggle to capture the scale and complexity of the heterogeneous organisations that use technical systems; conversely, informal approaches lack the rigour needed to inform the software design and construction process or to enable automated analysis. We revisit the concept of responsibility modelling, which models socio-technical systems as a collection of actors who discharge their responsibilities, using and producing resources in the process. In this thesis, responsibility modelling is formalised as a structured approach to socio-technical system specification and modelling, with well-defined semantics and support for automated structure and validity analysis. We provide structured definitions for entity types and relations, and define the semantics of delegation and dependency. A constraint logic is introduced, providing simple specification of complex interactions between entities. Additionally, we introduce the ability to model uncertainty explicitly. To support this formalism, we present a new software toolkit that supports the modelling and automatic analysis of responsibility models in both graphical and textual form. The new methodology is validated by applying it to case studies across different problem domains: a study of nuclear power station emergency planning is validated by comparison with a similar study performed using earlier forms of responsibility modelling, and a study of the TCAS mid-air collision avoidance system is validated by evaluation with domain experts. Additionally, we perform an explorative study of how well responsibility modelling is understood and applied, through a qualitative study of modellers.
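The thesis's formal semantics are not reproduced here; the sketch below is one plausible encoding of the core entities (actors, resources, responsibilities) together with two simple structural validity checks of the kind automated analysis can perform. The class names and checks are illustrative assumptions, not the published metamodel.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Actor:
    name: str

@dataclass
class Resource:
    name: str

@dataclass
class Responsibility:
    name: str
    holder: Optional[Actor] = None                    # who must discharge it
    requires: List[Resource] = field(default_factory=list)
    produces: List[Resource] = field(default_factory=list)

def structural_issues(model: List[Responsibility]) -> List[str]:
    """Two simple automated validity checks: responsibilities nobody
    holds, and required resources that nothing in the model produces."""
    issues = [f"unassigned: {r.name}" for r in model if r.holder is None]
    produced = {res.name for r in model for res in r.produces}
    for r in model:
        issues += [f"{r.name} requires unproduced resource: {res.name}"
                   for res in r.requires if res.name not in produced]
    return issues

ops = Actor("plant operator")
model = [Responsibility("raise alarm", holder=ops),
         Responsibility("evacuate site", requires=[Resource("alarm signal")])]
print(structural_issues(model))  # flags both problems with "evacuate site"
```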
275. Increasing service visibility for future, softwarised air traffic management data networks. White, Kyle John Sinclair (January 2017)
Air Traffic Management (ATM) is at an exciting frontier. The volume of air traffic is reaching the safe limits of current infrastructure, yet demand for more air traffic continues. To meet capacity demands, ATM data networks are increasing in complexity, with greater infrastructure integration, higher availability and precision of services, and the introduction of unmanned systems. Official recommendations following previous disruptive outages have highlighted the need for operators to have richer monitoring capabilities and on-demand visibility into operational systems in response to challenges. The work presented in this thesis helps ATM operators better understand, and gain greater visibility into, the behaviour of their services and infrastructure, with the primary aim of informing decision-making to reduce service disruption. This is achieved by combining a container-based NFV framework with Software-Defined Networking (SDN). The application of SDN+NFV in this work allows lightweight, chainable monitoring and anomaly detection functions to be deployed on demand, with the appropriate (sub)set of network traffic routed through these virtual network functions to provide timely, context-specific information. This container-based function deployment architecture allows for timely in-network processing through the instantiation of custom functionality at appropriate locations. When accidents do occur, such as the crash of a UAV, the lessons learnt should be integrated into future systems; for one such incident, the accident investigation identified a telemetry precursor an hour before the crash. The function deployment architecture allows operators to extend and adapt their network infrastructure to incorporate the latest monitoring recommendations. Furthermore, this work has examined relationships between application-level information and network-layer data in individual examples of a wide range of generalisable cases, including: the correspondence between the cyber and physical components of surveillance data; the rate of change of telemetry as an indicator of abnormal aircraft surface movements; and the emergent behaviour of network flooding. Each of these examples provides context-specific benefits to operators and a generalised basis from which further tools can be developed to enhance their understanding of their networks.
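The NFV toolchain itself is not shown here; the following is a minimal, self-contained sketch of the kind of lightweight, chainable monitoring function the architecture could instantiate on demand: a sliding-window packet-rate monitor for flood detection. The window length and threshold are illustrative.

```python
import time
from collections import deque

class FloodDetector:
    """Sliding-window packet-rate monitor: a stand-in for one of the
    chainable, on-demand monitoring functions described above."""

    def __init__(self, window_s=5.0, threshold_pps=1000):
        self.window_s = window_s
        self.threshold_pps = threshold_pps
        self.arrivals = deque()

    def observe(self, t=None):
        # Record one packet arrival; return True if the windowed
        # rate exceeds the flooding threshold.
        t = time.monotonic() if t is None else t
        self.arrivals.append(t)
        while self.arrivals and t - self.arrivals[0] > self.window_s:
            self.arrivals.popleft()
        return len(self.arrivals) / self.window_s > self.threshold_pps

det = FloodDetector(window_s=1.0, threshold_pps=3)
for t in (0.0, 0.1, 0.2, 0.3, 0.4):
    flooding = det.observe(t)
print(flooding)  # True: 5 packets in 1 s exceeds 3 packets/s
```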
276. Adaptive multiple importance sampling for Gaussian processes and its application in social signal processing. Xiong, Xiaoyu (January 2017)
Social signal processing aims to automatically understand and interpret social signals (e.g. facial expressions and prosody) generated during human-human and human-machine interactions. Automatic interpretation of social signals involves two fundamentally important aspects: feature extraction and machine learning. So far, machine learning approaches applied to social signal processing have mainly focused on parametric approaches (e.g. linear regression) or non-parametric models such as the support vector machine (SVM). However, these approaches either fail to take into account the uncertainty resulting from model misspecification or lack interpretability for analysing scenarios in social signal processing; consequently, they are less able to understand and interpret human behaviours effectively. Gaussian processes (GPs), which have gained popularity in data analysis, offer a solution to these limitations through their attractive properties: being non-parametric enables them to model data flexibly, and being probabilistic makes them capable of quantifying uncertainty. In addition, a proper parametrisation of the covariance function makes it possible to gain insight into the application under study. However, these appealing properties of GP models hinge on an accurate characterisation of the posterior distribution over the covariance parameters. This is normally done by means of standard MCMC algorithms, which require repeated expensive calculations involving the marginal likelihood. Motivated by the desire to avoid the inefficiency of MCMC algorithms rejecting a considerable number of expensive proposals, this thesis develops an alternative inference framework based on adaptive multiple importance sampling (AMIS). In particular, the thesis studies the application of AMIS to Gaussian processes in the case of a Gaussian likelihood, and proposes a novel pseudo-marginal-based AMIS (PM-AMIS) algorithm for non-Gaussian likelihoods, where the marginal likelihood is estimated unbiasedly. Experiments on benchmark data sets show that the proposed framework outperforms MCMC-based inference of GP covariance parameters in a wide range of scenarios. The PM-AMIS classifier, based on Gaussian processes with a newly designed group-automatic relevance determination (G-ARD) kernel, has been applied to predict whether a Flickr user is perceived to be above the median with respect to each of the Big-Five personality traits. The results show that, apart from the high prediction accuracies achieved (up to 79%, depending on the trait), the parameters of the G-ARD kernel allow the identification of the groups of features that best account for the classification outcome and, through their weight differences, provide indications about cultural effects. This demonstrates the value of the proposed non-parametric probabilistic framework for social signal processing. Feature extraction in signal processing is dominated by methods based on the short-time Fourier transform (STFT). Recently, Hilbert spectral analysis (HSA), a representation of the signal fundamentally different from the STFT, has been proposed. This thesis is also the first attempt to investigate the extraction of features from HSA and its application in social signal processing. The experimental results reveal that, using features extracted from the Hilbert spectrum of the voice data of female speakers, prediction accuracy of up to 81% can be achieved for their Big-Five personality traits, showing that HSA can serve as an effective alternative to the STFT for feature extraction in social signal processing.
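As background to the inference framework, the sketch below implements plain AMIS on a one-dimensional toy target with Gaussian proposals: all past samples are re-weighted under the deterministic mixture of the proposals used so far, and the next proposal adapts to the weighted moments. The toy target stands in for a GP covariance-parameter posterior; this is not the thesis's PM-AMIS algorithm.

```python
import numpy as np
from scipy.stats import norm

def log_target(x):
    # Toy unnormalised posterior over a single covariance parameter
    # (a stand-in; the thesis targets GP covariance-parameter posteriors).
    return norm.logpdf(x, loc=2.0, scale=0.5)

def amis(n_iter=20, n_per_iter=200, mu0=0.0, sigma0=3.0):
    mus, sigmas = [mu0], [sigma0]
    samples = np.empty(0)
    for _ in range(n_iter):
        samples = np.concatenate(
            [samples, np.random.normal(mus[-1], sigmas[-1], n_per_iter)])
        # Deterministic-mixture weights: every sample is weighted by the
        # target over the average density of ALL proposals used so far.
        mix = np.mean([norm.pdf(samples, m, s)
                       for m, s in zip(mus, sigmas)], axis=0)
        w = np.exp(log_target(samples)) / mix
        w /= w.sum()
        # Adapt the next proposal to the weighted posterior moments.
        mu = np.sum(w * samples)
        mus.append(mu)
        sigmas.append(max(np.sqrt(np.sum(w * (samples - mu) ** 2)), 1e-3))
    return samples, w

samples, w = amis()
print("posterior mean estimate:", np.sum(w * samples))  # should be near 2.0
```

The pseudo-marginal variant would replace the exact target density here with an unbiased estimate of the GP marginal likelihood.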
277. Inferring the geolocation of tweets at a fine-grained level. Gonzalez Paule, Jorge David (January 2019)
Recently, the use of Twitter data has become important for a wide range of real-time applications, including real-time event detection, topic detection, and disaster and emergency management. These applications require the precise location of tweets for their analysis. However, only approximately 1% of tweets are geotagged at a fine-grained level, which remains insufficient for such applications. To overcome this limitation, predicting the location of non-geotagged tweets, while challenging, can increase the sample of geotagged data available to support the applications mentioned above. Nevertheless, existing approaches to tweet geolocalisation mostly focus on geolocating tweets at a coarse-grained level of granularity (i.e., city or country level); geolocalising tweets at a fine-grained level (i.e., street or building level) has thus emerged as an open research problem. In this thesis, we investigate the problem of inferring the geolocation of non-geotagged tweets at a fine-grained level of granularity (i.e., at most 1 km error distance). In particular, we aim to predict the geolocation where a given tweet was generated, using its text as a source of evidence. This thesis argues that the geolocalisation of non-geotagged tweets at a fine-grained level can be achieved by exploiting the characteristics of the 1% of tweets in the Twitter stream that already carry fine-grained geotags. We evaluate the state of the art, derive insights into its issues and propose an evolution of techniques to achieve the geolocalisation of tweets at a fine-grained level. First, we explore the existing approaches in the literature for tweet geolocalisation and derive insights into the problems they exhibit when adapted to work at a fine-grained level. To overcome these problems, we propose a new approach that ranks individual geotagged tweets by their content similarity to a given non-geotagged tweet. Our experimental results show significant improvements over previous approaches. Next, we explore the predictability of the location of a tweet at a fine-grained level, in order to reduce the average error distance of the predictions. We postulate that, for a fine-grained prediction to be obtainable, a correlation between similarity and geographical distance should exist, and we define the boundaries within which fine-grained predictions can be achieved. To do this, we incorporate into the ranking approach a majority-voting algorithm that assesses whether such a correlation exists, by exploiting the geographical evidence encoded within the Top-N most similar geotagged tweets in the ranking. We report experimental results demonstrating that, by considering this geographical evidence, we can reduce the average error distance, at a cost in coverage (the number of tweets for which our approach can find a fine-grained geolocation). Furthermore, we investigate whether the quality of the ranking of the Top-N geotagged tweets affects the effectiveness of fine-grained geolocalisation, and we propose a new approach to improve the ranking. To this end, we adopt a learning-to-rank approach that re-ranks geotagged tweets according to their geographical proximity to a given non-geotagged tweet. We test different learning-to-rank algorithms, propose multiple features to model fine-grained geolocalisation, and investigate the best-performing combination of those features. This thesis also demonstrates the applicability and generalisation of our fine-grained geolocalisation approaches in a practical scenario: a traffic incident detection task. We show the effectiveness of newly geolocalised incident-related tweets in detecting the geolocation of real incident reports, and demonstrate that the overall performance of the traffic incident detection task can be improved by enhancing the already available geotagged tweets with new tweets geolocalised using our approach. The key contribution of this thesis is the development of effective approaches for geolocalising tweets at a fine-grained level. The thesis provides insights into the main challenges of fine-grained geolocalisation, derived from exhaustive experiments over a ground truth of geotagged tweets gathered from two different cities, and demonstrates effectiveness in a traffic incident detection task by geolocalising new incident-related tweets with our fine-grained geolocalisation approaches.
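The thesis's exact similarity function, voting rule and learning-to-rank features are not reproduced here; the sketch below illustrates the ranking-plus-majority-voting idea it describes: rank geotagged tweets by text similarity to the query, then emit a prediction only when the Top-N agree geographically. The term-frequency representation, the roughly 1 km grid-cell size and the vote threshold are illustrative assumptions.

```python
import math
from collections import Counter

def tf(text):
    words = text.lower().split()
    counts = Counter(words)
    return {w: n / len(words) for w, n in counts.items()}

def cosine(a, b):
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def geolocate(query_text, geotagged, top_n=5, min_votes=3, cell=0.01):
    """Rank geotagged tweets by text similarity to the query, then
    majority-vote over ~1 km grid cells of the Top-N results."""
    q = tf(query_text)
    top = sorted(geotagged, key=lambda t: cosine(q, tf(t["text"])),
                 reverse=True)[:top_n]
    key = lambda t: (round(t["lat"] / cell), round(t["lon"] / cell))
    best_cell, votes = Counter(key(t) for t in top).most_common(1)[0]
    if votes < min_votes:
        return None  # no geographic consensus: abstain rather than guess
    members = [t for t in top if key(t) == best_cell]
    return (sum(t["lat"] for t in members) / len(members),
            sum(t["lon"] for t in members) / len(members))

corpus = [
    {"text": "huge queue at central station", "lat": 55.8609, "lon": -4.2514},
    {"text": "central station platform packed", "lat": 55.8607, "lon": -4.2509},
    {"text": "nice sunset over the park", "lat": 55.8530, "lon": -4.2890},
]
print(geolocate("stuck at central station again", corpus, top_n=3, min_votes=2))
```

Abstaining when the Top-N disagree geographically is what trades coverage for a lower average error distance, as the abstract notes.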
278. Task recovery in self-organised multi-agent systems for distributed domains. Al-Karkhi, A. (January 2018)
Grid computing and cloud systems are distributed systems that provide substantial, widely accessible services and resources. Quality of service is affected by issues of resource allocation, sharing, task execution and node failure. The focus of this research is on task execution in distributed environments and the effects of node failure on service provision. Most fault-tolerance methods in the literature use reactive techniques, which provide solutions to failure only after it occurs. In contrast, this research argues that multi-agent systems with self-organising capabilities can provide a proactive methodology to improve task execution in open, dynamic and distributed environments. We model a system of autonomous agents with heterogeneous resources and propose a new delegation protocol for executing tasks within their time constraints, which helps avoid the loss of tasks and improves efficiency. However, this method on its own is not sufficient in terms of task-execution throughput, especially in the presence of agent failure. Hence, we propose a self-organisation technique, realised in this research by two different mechanisms for creating organisations of agents with a given structure, and we additionally adopt task delegation within the organisations. Adding an organisation structure with agent roles to the network enables smoother performance, increases task-execution throughput and copes with agent failures. We also study the failure problem as it manifests within the organisations, and suggest an improvement to the organisation structure involving a further protocol and a new role. An exploratory study of dynamic, heterogeneous organisations of agents has also been conducted to understand the formation of organisations in a dynamic environment where agents may fail and new agents may join; under these conditions, new organisations may evolve and existing organisations may change.
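The delegation protocol is not specified in the abstract; the sketch below gives one plausible reading of deadline-aware delegation: a task is offered to the peer with the earliest feasible completion time and accepted only if its deadline can be met. The class names and scheduling rule are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Task:
    name: str
    work: float      # processing-time units required
    deadline: float  # absolute time by which it must finish

@dataclass
class Agent:
    name: str
    speed: float
    queue: List[Task] = field(default_factory=list)

    def finish_time(self, now, task):
        # Earliest completion given the agent's current backlog.
        backlog = sum(t.work for t in self.queue)
        return now + (backlog + task.work) / self.speed

def delegate(task, agents, now=0.0) -> Optional[Agent]:
    """Offer the task to the peer that can finish it earliest;
    accept only if that still meets the task's deadline."""
    best = min(agents, key=lambda a: a.finish_time(now, task))
    if best.finish_time(now, task) <= task.deadline:
        best.queue.append(task)
        return best
    return None  # no agent can meet the deadline; the task would be lost

agents = [Agent("a1", speed=1.0), Agent("a2", speed=2.0)]
chosen = delegate(Task("render", work=4.0, deadline=3.0), agents)
print(chosen.name)  # a2: finishes at t=2.0, within the deadline
```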
279. Dynamic load balancing for massively multiplayer online games. Abdulazeez, S. (January 2018)
In recent years, there has been significant growth in online gaming. Today's Massively Multiplayer Online Games (MMOGs) can contain millions of simultaneous players scattered across the world and participating with each other within a single shared game. Traditional client/server architectures for MMOGs exhibit problems of scalability, reliability and latency, as well as the cost of adding new servers when demand is too high. Peer-to-peer (P2P) architecture provides considerable support for the scalability of MMOGs, and also achieves good response times by supporting direct connections between players. This thesis proposes a novel hybrid P2P architecture for MMOGs and a new dynamic load-balancing scheme built on it. We divide the game world space into several regions, each controlled and managed by both a super-peer and a clone-super-peer. The region's super-peer is responsible for distributing game updates among the players inside the region and for managing the game communications between them, while the clone-super-peer is responsible for controlling players' migration from one region to another, in addition to becoming the region's super-peer when the current super-peer leaves the game. In this thesis, we design and simulate static and dynamic Area of Interest Management (AoIM) for MMOGs based on both the hybrid P2P and client/server architectures, allowing players to move from one region to another, and we design and evaluate static and dynamic load balancing for MMOGs based on the hybrid P2P architecture. We use OPNET Modeler 18.0 to simulate and evaluate the proposed system, in particular its standard applications, custom applications, TDMA and RX Group. Our dynamic load balancer is responsible for distributing the load among the regions in the game world space, and is positioned between the game server and the regions. The results, following extensive experiments, show that low delay and high-traffic communication can be achieved using the hybrid P2P architecture, static and dynamic AoIM, and dynamic load balancing based on the hybrid P2P system.
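The balancer is evaluated in OPNET rather than presented as code; the following is a minimal sketch of one region-level policy consistent with the description: while a region exceeds its player capacity, migrate players to the least-loaded neighbouring region. The Region structure, capacity threshold and migration rule are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Region:
    name: str
    players: List[str] = field(default_factory=list)
    neighbours: List["Region"] = field(default_factory=list)

def rebalance(regions: List[Region], capacity: int) -> None:
    """While a region exceeds `capacity`, migrate players to its
    least-loaded neighbour, stopping when no neighbour has room."""
    for region in regions:
        while len(region.players) > capacity:
            if not region.neighbours:
                break
            target = min(region.neighbours, key=lambda r: len(r.players))
            if len(target.players) >= capacity:
                break  # nowhere to migrate; region stays overloaded
            target.players.append(region.players.pop())

a = Region("a", players=[f"p{i}" for i in range(8)])
b = Region("b")
a.neighbours, b.neighbours = [b], [a]
rebalance([a, b], capacity=5)
print(len(a.players), len(b.players))  # 5 3
```

In the architecture described above, this decision logic would sit with the balancer between the game server and the regions, with the clone-super-peers carrying out the actual player migrations.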
280. Factors affecting electronic commerce acceptance and usage in Libyan ICT organizations. Mrabet, A. (January 2018)
Studying how individuals accept new information systems such as E-commerce is one of the main issues in information systems research. Libya needs to develop and implement E-commerce systems successfully, having fallen far behind other similar states in the region with regard to internet and E-commerce uptake, and the successful implementation of any system depends on its acceptance and use by potential users. This thesis investigates how managers make their decisions about E-commerce systems. The investigation is conducted across two culturally similar countries, Libya and Tunisia, to identify factors that differ between the two communities in terms of technology acceptance. The study is undertaken using the well-established Technology Acceptance Model (TAM), but extends it by incorporating new factors that have both direct and indirect influences on a manager's decision to use E-commerce technology. The thesis seeks to answer the research question "What factors affect managers' decisions to accept and use E-commerce systems in Libyan and Tunisian companies?" The research adds constructs to the original Technology Acceptance Model, adapted from the Theory of Reasoned Action and the Theory of Planned Behaviour, and adopts action research, case study and questionnaire survey methods to test its 18 hypotheses. The results confirm the value of the new extended Technology Acceptance Model and hence represent a contribution to the literature on the adoption of computer systems. The contribution of this research is the development and testing of the Technology Acceptance Model on the usage of E-commerce in Libyan and Tunisian companies; the resulting Extended Technology Acceptance Model for E-commerce (E-COMTAM) is based on a critical analysis of the TAM, with eight additional items. The dissertation has seven chapters: Chapter One covers the concepts of E-commerce; Chapter Two, a theoretical literature review of the Technology Acceptance Model; Chapter Three, the research aims and objectives; Chapter Four, the research methodology; Chapter Five, the research variables and hypotheses; Chapter Six, the data analysis and research results; and Chapter Seven, the conclusion and recommendations.