61

Hypergraph partitioning in the cloud

Lotfifar, Foad January 2016 (has links)
The thesis investigates the partitioning and load balancing problem which has many applications in High Performance Computing (HPC). The application to be partitioned is described with a graph or hypergraph. The latter is of greater interest as hypergraphs, compared to graphs, have a more general structure and can be used to model more complex relationships between groups of objects such as non-symmetric dependencies. Optimal graph and hypergraph partitioning is known to be NP-Hard but good polynomial time heuristic algorithms have been proposed. In this thesis, we propose two multi-level hypergraph partitioning algorithms. The algorithms are based on rough set clustering techniques. The first algorithm, which is a serial algorithm, obtains high quality partitionings and improves the partitioning cut by up to 71% compared to the state-of-the-art serial hypergraph partitioning algorithms. Furthermore, the capacity of serial algorithms is limited due to the rapid growth of problem sizes of distributed applications. Consequently, we also propose a parallel hypergraph partitioning algorithm. Considering the generality of the hypergraph model, designing a parallel algorithm is difficult and the available parallel hypergraph algorithms offer less scalability compared to their graph counterparts. The issue is twofold: the parallel algorithm and the complexity of the hypergraph structure. Our parallel algorithm provides a trade-off between global and local vertex clustering decisions. By employing novel techniques and approaches, our algorithm achieves better scalability than the state-of-the-art parallel hypergraph partitioner in the Zoltan tool on a set of benchmarks, especially ones with irregular structure. Furthermore, recent advances in cloud computing and the services they provide have led to a trend in moving HPC and large scale distributed applications into the cloud.
Despite its advantages, some aspects of the cloud, such as limited network resources, present a challenge to running communication-intensive applications and make them non-scalable in the cloud. While hypergraph partitioning is proposed as a solution for decreasing the communication overhead within parallel distributed applications, it can also offer advantages for running these applications in the cloud. The partitioning is usually done as a pre-processing step before running the parallel application. As parallel hypergraph partitioning itself is a communication-intensive operation, running it in the cloud is hard and suffers from poor scalability. The thesis also investigates the scalability of parallel hypergraph partitioning algorithms in the cloud and the challenges they present, and proposes solutions to improve the cost/performance ratio for running the partitioning problem in the cloud. Our algorithms, known as FEHG and PFEHG, are implemented as a new hypergraph partitioning package within Zoltan, an open source Linux-based toolkit for parallel partitioning, load balancing and data management designed at Sandia National Labs.
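The partitioning cut mentioned above is commonly measured with the connectivity-1 metric, which charges each hyperedge for every extra part it spans. A minimal sketch, assuming unit edge weights; this illustrates the objective only and is not the thesis's FEHG implementation (vertex and part labels are invented):

```python
def connectivity_cut(hyperedges, part_of):
    """Connectivity-1 cut: each hyperedge contributes (parts spanned - 1)."""
    cut = 0
    for edge in hyperedges:
        spanned = {part_of[v] for v in edge}  # distinct parts this edge touches
        cut += len(spanned) - 1
    return cut

# Toy hypergraph: 5 vertices, 3 hyperedges, bipartitioned into parts 0 and 1.
edges = [{0, 1, 2}, {2, 3}, {1, 3, 4}]
parts = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1}
print(connectivity_cut(edges, parts))  # → 2
```

A partitioner tries to minimise this value subject to balance constraints on the part sizes.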
62

Activity recognition in event driven IoT-service architectures

Meissner, Stefan January 2016 (has links)
With the advent of the Internet of Things, far more sensor-generated data streams have become available from which researchers want to extract context. Many researchers have worked on context recognition for rather unimodal data in pervasive systems, but recent work on object virtualisation in the Internet-of-Things domain enables context exploitation based on processing multi-modal information collected from pervasive systems. In addition to the sensed data, there is formalised knowledge about real-world objects emitted by IoT services, as contributed by the author in [1], [2] and [3]. In this work an approach for context recognition is proposed that takes knowledge about virtual objects and their relationships into account in order to improve context recognition. The approach will only recognise context that has been predefined manually beforehand; no new context information can be exploited with the work proposed here. This work's scope is recognising the activity that a user is most likely involved in by observing the evolving context of a user of a pervasive system. As an assumption for this work, the activities have to be modelled as graphs in which the nodes are situations observable by a pervasive system. The pervasive system to be utilised has to be built compliant with the Architectural Reference Model for the IoT (ARM), to which the author has contributed in [4] and [5]. The hybrid context model proposed in this thesis is made of an ontology-based part as well as a probability-based part. Ontologies assist in adapting the probability distributions for the Hidden Markov Model-based recognition technique according to the current context. It could be demonstrated in this work that the context-aware adaptation of the recognition model increased the detection rate of the activity recognition system.
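The Hidden Markov Model recognition step can be illustrated with standard Viterbi decoding over a toy activity model. The states, observations and probabilities below are invented for illustration and are not the thesis's model; the thesis's contribution is adapting such probabilities from ontological context:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden state sequence for an observation sequence."""
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    path = {s: [s] for s in states}
    for o in obs[1:]:
        V.append({})
        new_path = {}
        for s in states:
            # best predecessor state for reaching s with this observation
            prob, prev = max(
                (V[-2][p] * trans_p[p][s] * emit_p[s][o], p) for p in states
            )
            V[-1][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(V[-1], key=V[-1].get)
    return path[best]

# Toy activity model: two activities, two observable situations.
states = ["walking", "sitting"]
start = {"walking": 0.5, "sitting": 0.5}
trans = {"walking": {"walking": 0.7, "sitting": 0.3},
         "sitting": {"walking": 0.3, "sitting": 0.7}}
emit = {"walking": {"moving": 0.9, "still": 0.1},
        "sitting": {"moving": 0.2, "still": 0.8}}
print(viterbi(["moving", "moving", "still"], states, start, trans, emit))
# → ['walking', 'walking', 'sitting']
```

Context-aware adaptation, as described in the abstract, would adjust `trans` and `emit` at run time based on the ontology before decoding.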
63

A knowledge management based cloud computing adoption decision making framework

Alhammadi, Abdullah January 2016 (has links)
Cloud computing represents a paradigm shift in the way that IT services are delivered within enterprises. There are numerous challenges for enterprises planning to migrate to a cloud computing environment, as cloud computing affects many different aspects of an organisation and adoption issues vary between organisations. A literature review identified that a number of models and frameworks have been developed to support cloud adoption. However, existing models and frameworks have been devised for technologically developed environments, and there has been very little examination of whether the factors that affect cloud adoption in technologically developing countries are different. The primary research carried out for this thesis included an investigation of the factors that influence cloud adoption in Saudi Arabia, which is regarded as a technologically developing country. This thesis presents a holistic Knowledge Management Based Cloud Adoption Decision Making Framework which has been developed to support decision makers at all stages of the cloud adoption decision making process. The theoretical underpinnings for the research come from Knowledge Management, including the literature on decision making, organisational learning, technology adoption and technology diffusion theories. The framework includes supporting models and tools, combining the Analytical Hierarchical Process and Case Based Reasoning to support decision making at the Strategic and Tactical levels and the Pugh Decision Matrix at the Operational level. The Framework was developed based on secondary and primary research and was validated with expert users. The Framework is customisable, allowing decision makers to set their own weightings and add or remove decision making criteria. The results of validation show that the framework enhances Cloud Adoption decision making and provides support for decision makers at all levels of the decision making process.
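The Analytical Hierarchical Process step can be sketched with the common geometric-mean approximation of the priority vector. This is a simplified illustration of the general AHP technique, not the framework's actual tooling, and the criteria weights are invented:

```python
from math import prod

def ahp_weights(pairwise):
    """Approximate AHP priority weights via row geometric means."""
    n = len(pairwise)
    # geometric mean of each row of the pairwise comparison matrix
    gmeans = [prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gmeans)
    return [g / total for g in gmeans]

# Toy comparison: criterion A judged 3x as important as criterion B.
print(ahp_weights([[1, 3], [1 / 3, 1]]))  # → approximately [0.75, 0.25]
```

With customisable weightings, as the abstract describes, decision makers would supply their own pairwise judgements and the derived weights would then rank adoption alternatives.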
64

Analysing and quantifying the influence of system parameters on virtual machine co-residency in public clouds

Alabdulhafez, Abdulaziz January 2015 (has links)
Public Infrastructure-as-a-Service (IaaS) cloud promises significant efficiency to businesses and organisations. This efficiency is possible by allowing “co-residency”, where Virtual Machines (VMs) that belong to multiple users share the same physical infrastructure. With co-residency being inevitable in public IaaS clouds, malicious users can leverage information leakage via side channels to launch several powerful attacks on honest co-resident VMs. Because co-residency is a necessary first step to launching side channel attacks, this motivates this thesis to look into understanding the co-residency probability (i.e. the probability that a given VM receives a co-resident VM). This thesis aims to analyse and quantify the influence of cloud parameters (such as the number of hosts and users) on the co-residency probability in four commonly used Placement Algorithms (PAs). These PAs are First Fit, Next Fit, Power Save and Random. This analysis then helps to identify the cloud parameter settings that reduce the co-residency probability under the four PAs. The large number of cloud parameters and parameter settings to consider forms the main challenge in this thesis. To overcome this challenge, fractional factorial design is used to reduce the number of experiments required to analyse and quantify the parameters' influence under various settings. This thesis takes a quantitative experimental simulation and analytical prediction approach to achieve its aim. Using a purpose-built VM Co-residency simulator, (i) the most influential cloud parameters affecting co-residency probability in the four PAs have been identified. Identifying the most influential parameters has helped to (ii) explore the best settings of these parameters that reduce the co-residency probability under the four PAs.
Finally, analytical estimation, with the coexistence of different populations of attackers, has been derived to (iii) find the probability that a new co-residing VM belongs to an attacker. This thesis identifies the number of hosts as the most influential cloud parameter on the co-residency probability in the four PAs. Also, this thesis presents evidence that VMs hosted in IaaS clouds that use Next Fit or Random are more resilient against receiving co-resident VMs compared to when First Fit or Power Save are used. Further, VMs in IaaS clouds with a higher number of hosts are less likely to exhibit co-residency. This thesis generates new insights into the potential of co-residency reduction to reduce the attack surface for side channel attacks. The outcome of this thesis is a plausible blueprint for IaaS cloud providers to consider the influence on the co-residency probability as an important selection factor for cloud settings and PAs.
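The intuition behind the First Fit versus Random result can be reproduced with a toy placement simulation. This is a sketch only; the thesis's purpose-built simulator is far richer, and the host counts and capacities below are invented:

```python
import random

def place(num_hosts, capacity, num_vms, policy, rng):
    """Assign VMs to hosts under a simple placement policy."""
    hosts = [0] * num_hosts  # VM count per host
    for _ in range(num_vms):
        free = [i for i, c in enumerate(hosts) if c < capacity]
        if policy == "first_fit":
            target = free[0]          # pack onto the first host with room
        else:                          # "random": any host with spare capacity
            target = rng.choice(free)
        hosts[target] += 1
    return hosts

def coresident_fraction(hosts):
    """Fraction of VMs sharing a host with at least one other VM."""
    total = sum(hosts)
    shared = sum(c for c in hosts if c > 1)
    return shared / total

rng = random.Random(42)
ff = place(num_hosts=50, capacity=4, num_vms=20, policy="first_fit", rng=rng)
rnd = place(num_hosts=50, capacity=4, num_vms=20, policy="random", rng=rng)
print(coresident_fraction(ff), coresident_fraction(rnd))
```

First Fit packs every VM onto the first few hosts, so all 20 VMs are co-resident; random placement over 50 hosts spreads them out, so the co-resident fraction is lower, matching the resilience finding in the abstract.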
65

Data confidentiality and risk management in cloud computing

Khan, Afnan Ullah January 2014 (has links)
Cloud computing can enable an organisation to outsource computing resources to gain economic benefits. Cloud computing is transparent to both the programmers and the users; as a result, it introduces new challenges when compared with previous forms of distributed computing. Cloud computing enables its users to abstract away from low-level configuration (e.g. configuring IP addresses and routers). It creates an illusion that this entire configuration is automated. This illusion is also true for security services, for instance automating security policies and access control in the Cloud, so that companies using the Cloud perform only very high-level (business oriented) configuration. This thesis identifies research challenges related to security posed by the transparency of distribution, abstraction of configuration and automation of services that Cloud computing entails. It provides solutions to some of these research challenges. As mentioned, Cloud computing provides outsourcing of resources; the outsourcing does not enable a data owner to outsource the responsibility of confidentiality, integrity and access control, as it remains the responsibility of the data owner. The challenge of providing confidentiality, integrity and access control of data hosted on Cloud platforms is not catered for by traditional access control models. These models were developed over the course of many decades to fulfil the requirements of organisations which assumed full control over the physical infrastructure of the resources they control access to. The assumption is that the data owner, data controller and administrator are present in the same trusted domain. This assumption does not hold for the Cloud computing paradigm. Risk management of data present on the Cloud is another challenge. There is a requirement to identify the risks an organisation would be taking while hosting data and services on the Cloud.
Furthermore, the identification of risk is only the first step; the next is to develop mitigation strategies. As part of the thesis, two main areas of research are targeted: distributed access control and security risk management.
66

Anomaly detection and prediction in communication networks using wavelet transforms

Alarcon Aquino, Vicente January 2003 (has links)
No description available.
67

EXCLAIM framework : a monitoring and analysis framework to support self-governance in Cloud Application Platforms

Dautov, Rustem January 2015 (has links)
The Platform-as-a-Service segment of Cloud Computing has been steadily growing over the past several years, with more and more software developers opting for cloud platforms as convenient ecosystems for developing, deploying, testing and maintaining their software. Such cloud platforms also play an important role in delivering an easily-accessible Internet of Services. They provide rich support for software development, and, following the principles of Service-Oriented Computing, offer their subscribers a wide selection of pre-existing, reliable and reusable basic services, available through a common platform marketplace and ready to be seamlessly integrated into users' applications. Such cloud ecosystems are becoming increasingly dynamic and complex, and one of the major challenges faced by cloud providers is to develop appropriate scalable and extensible mechanisms for governance and control based on run-time monitoring and analysis of (extreme amounts of) raw heterogeneous data. In this thesis we address an important research question: how can we support self-governance in cloud platforms delivering the Internet of Services in the presence of large amounts of heterogeneous and rapidly changing data? To address this research question and demonstrate our approach, we have created the Extensible Cloud Monitoring and Analysis (EXCLAIM) framework for service-based cloud platforms. The main idea underpinning our approach is to encode monitored heterogeneous data using Semantic Web languages, which then enables us to integrate these semantically enriched observation streams with static ontological knowledge and to apply intelligent reasoning. This has allowed us to create an extensible, modular, and declaratively defined architecture for performing run-time data monitoring and analysis with a view to detecting critical situations within cloud platforms.
By addressing the main research question, our approach contributes to the domain of Cloud Computing, and in particular to the area of autonomic and self-managing capabilities of service-based cloud platforms. Our main contributions include the approach itself, which allows monitoring and analysing heterogeneous data in an extensible and scalable manner, the prototype of the EXCLAIM framework, and the Cloud Sensor Ontology. Our research also contributes to the state of the art in Software Engineering by demonstrating how existing techniques from several fields (i.e., Autonomic Computing, Service-Oriented Computing, Stream Processing, Semantic Sensor Web, and Big Data) can be combined in a novel way to create an extensible, scalable, modular, and declaratively defined monitoring and analysis solution.
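The core idea of joining an observation stream with static ontological knowledge can be conveyed by a toy example. Plain triples and a hard-coded rule stand in here for the actual Semantic Web stack (ontologies, reasoners, stream processors); all entity names and thresholds are invented:

```python
# Static "ontological" knowledge about the platform, as triples.
ontology = {
    ("vm42", "hostedOn", "node7"),
    ("node7", "type", "CriticalNode"),
}

def critical_alerts(stream):
    """Flag high-CPU observations for VMs hosted on critical nodes."""
    alerts = []
    for subj, pred, obj in stream:
        if pred == "cpuLoad" and obj > 0.9:
            # join the observation with static knowledge about hosting
            hosts = {h for (s, p, h) in ontology
                     if s == subj and p == "hostedOn"}
            if any((h, "type", "CriticalNode") in ontology for h in hosts):
                alerts.append(subj)
    return alerts

observations = [("vm42", "cpuLoad", 0.95), ("vm13", "cpuLoad", 0.99)]
print(critical_alerts(observations))  # → ['vm42']
```

`vm13` is equally overloaded but is not known to sit on a critical node, so only `vm42` is flagged; this is the kind of context-sensitive conclusion that semantic enrichment makes possible over raw metrics.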
68

Service testing for the 'Internet of Things'

Reetz, Eike S. January 2016 (has links)
Services that represent sensor and actuator nodes, together with service orchestration, aid in overcoming the heterogeneous structure of the Internet of Things (IoT). Interconnecting different sensor and actuator nodes and exposing them as services is a complex topic which is even more demanding for testing. Further effort is needed to enable common and efficient methodologies for testing IoT-based services. IoT-based services differ from web services since they usually interact with the physical environment via sensor and actuator nodes. This changes how testing can be performed. An open research question is how to apply Model-Based Testing (MBT) approaches to facilitate scalable and efficient test automation. This thesis introduces a novel test framework to facilitate functional evaluation of IoT-based services based on MBT methodologies. The concept separates the service logic from connected sensor and actuator nodes in a sandbox environment. Furthermore, a new IoT service behaviour model is designed for representing relevant characteristics of IoT-based services and ensuring the automated emulation of sensor nodes. The IoT behaviour model proves to be automatically transformable into executable Test Cases (TCs). As a proof of concept, the automated test approach is prototypically implemented as a novel test tool. The execution of the TCs reveals that crucial failures, such as unexpected messages, data types, or data values, can be detected during test execution. Deriving tests from a test model typically results in a huge number of TCs, which cannot be executed within a reasonable time and with limited resources. To enhance the diversity of executed TCs, similarity investigation algorithms are proposed and validated. The results show that the proposed Diversity-based Steady State Genetic algorithm can outperform existing solutions by up to 11.6% with less computation time. With regard to verifying the failure detection rate, experiments show that the proposed Group Greedy algorithm can enhance the rate by up to 29%.
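Diversity-based test selection can be illustrated with a simple greedy farthest-first baseline over Jaccard distance. This is a toy stand-in for the thesis's Diversity-based Steady State Genetic algorithm, and the test-case step names are invented:

```python
def jaccard_distance(a, b):
    """1 minus |intersection| / |union| over the steps of two test cases."""
    a, b = set(a), set(b)
    return 1.0 - len(a & b) / len(a | b)

def greedy_diverse_subset(test_cases, k):
    """Pick k test cases, each maximising distance to those already chosen."""
    selected = [test_cases[0]]
    while len(selected) < k:
        best = max(
            (tc for tc in test_cases if tc not in selected),
            key=lambda tc: min(jaccard_distance(tc, s) for s in selected),
        )
        selected.append(best)
    return selected

# Test cases as step sequences; the third differs most from the first.
tcs = [["connect", "read"], ["connect", "read", "close"], ["auth", "write"]]
print(greedy_diverse_subset(tcs, 2))
# → [['connect', 'read'], ['auth', 'write']]
```

The genetic approach in the thesis explores the same objective, maximising diversity of the executed subset, but searches the space of subsets rather than building one greedily.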
69

An artistic perspective on distributed computer networks : creativity in human-machine systems

Gapsevicius, Mindaugas January 2016 (has links)
This thesis is written from an artistic perspective as a reflection on currently significant discussions in media theory, with a focus on the impact of technology on society. While mapping boundaries of contemporary art, post-digital art is considered the best for describing current discourses in media theory in the context of this research. Bringing into the discussion artworks by Martin Howse & Jonathan Kemp (2001-2008), Maurizio Bolognini (Bolognini 1988-present), and myself (mi_ga 2006), among many others, this research defines post-digital art, which in turn defines a complexity of interactions between elements of different natures, such as the living and non-living, human and machine, art and science. Within the analysis of P2P networks, I highlight Milgram's (1967) idea of six degrees of separation, which, at least from a speculative point of view, is interesting for the implementation of human-machine concepts in future technological developments. From this perspective, I argue that computer networks could, in the future, have more potential for merging with society if developed similarly to the computer routing scheme implemented in the Freenet distributed information storage and retrieval system. The thesis then describes my own artwork, 0.30402944246776265, including two newly developed plugins for the Freenet storage system; the first plugin is constructed to fulfil the idea of interacting elements of different natures (in this case, the WWW and Freenet), while the other plugin attempts to visualize data flow within the Freenet storage and retrieval system. Altogether, this thesis proposes that a reconsideration of distributed and self-organized information systems, through an artistic and philosophical lens, can open up a space for the rethinking of the current integration of society and technology.
70

Provenance-driven diagnostic framework for task evictions mitigating strategy in cloud computing

Albatli, Abdulaziz Mohammed N. January 2017 (has links)
Cloud computing is an evolving paradigm. It delivers virtualized, scalable and elastic resources (e.g. CPU, memory) over a network (e.g. Internet) from data centres to users (e.g. individuals, enterprises, governments). Applications, platforms, and infrastructures are Cloud services that users can access. Clouds enable users to run highly complex operations to satisfy computation needs through resource virtualization. Virtualization is a method to run a number of virtual machines (VM) on a single physical server. However, VMs are not a necessity in the Cloud. Cloud providers tend to overcommit resources, aiming to leverage unused capacity and maximize profits. This over-commitment of resources can lead to an overload of the actual physical machine, which lowers performance or leads to the failure of tasks due to lack of resources (i.e. CPU or RAM), and consequently leads to SLA violations. There are a number of different strategies to mitigate the overload, one of which is VM task eviction. The ambition of this research is to adapt a provenance model, PROV, to help understand the historical usage of a Cloud system and the components that contributed to the overload, so that the causes for task eviction can be identified for future prevention. A novel provenance-driven diagnostic framework is proposed. By studying Google’s 29-day Cloud dataset, the PROV model was extended to PROV-TE, which underpinned a number of diagnostic algorithms for identifying evicted tasks due to specific causes. The framework was implemented and tested against the Google dataset. To further evaluate the framework, a simulation tool, SEED, was used to replicate task eviction behaviour with the specifications of Google Cloud and Amazon EC2. The framework, specifically the diagnostic algorithms, was then applied to audit the causes and to identify the relevant evicted tasks. The results were then analysed using precision and recall measures.
The average precision and recall of the diagnostic algorithms are 83% and 90%, respectively.
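The precision and recall figures above follow the standard definitions over the sets of diagnosed and actually evicted tasks; a minimal sketch (the task IDs are invented):

```python
def precision_recall(diagnosed, actual):
    """Precision and recall of diagnosed task evictions vs ground truth."""
    tp = len(diagnosed & actual)  # correctly diagnosed evictions
    precision = tp / len(diagnosed) if diagnosed else 0.0
    recall = tp / len(actual) if actual else 0.0
    return precision, recall

diagnosed = {"t1", "t2", "t3", "t4"}  # tasks the algorithm flagged
actual = {"t1", "t2", "t3", "t5"}     # tasks actually evicted for this cause
print(precision_recall(diagnosed, actual))  # → (0.75, 0.75)
```

High precision means few tasks are wrongly attributed to a cause; high recall means few genuinely evicted tasks are missed.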